00:00:00.001 Started by upstream project "autotest-nightly" build number 3708 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3089 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.107 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.108 The recommended git tool is: git 00:00:00.108 using credential 00000000-0000-0000-0000-000000000002 00:00:00.109 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.157 Fetching changes from the remote Git repository 00:00:00.160 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.192 Using shallow fetch with depth 1 00:00:00.192 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.192 > git --version # timeout=10 00:00:00.230 > git --version # 'git version 2.39.2' 00:00:00.230 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.230 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.230 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.741 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.754 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.766 Checking out Revision c7986954d8037b9c61764d44ed2af24625b251c6 (FETCH_HEAD) 00:00:06.766 > git config core.sparsecheckout # timeout=10 00:00:06.777 > git read-tree -mu HEAD # timeout=10 00:00:06.794 > git checkout -f c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=5 00:00:06.815 Commit message: "inventory/dev: add missing long names" 00:00:06.815 > git rev-list --no-walk c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=10 00:00:06.905 [Pipeline] Start of Pipeline 00:00:06.920 [Pipeline] library 00:00:06.921 Loading library shm_lib@master 00:00:06.922 Library shm_lib@master is cached. Copying from home. 00:00:06.939 [Pipeline] node 00:00:06.948 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.949 [Pipeline] { 00:00:06.959 [Pipeline] catchError 00:00:06.961 [Pipeline] { 00:00:06.974 [Pipeline] wrap 00:00:06.983 [Pipeline] { 00:00:06.990 [Pipeline] stage 00:00:06.992 [Pipeline] { (Prologue) 00:00:07.160 [Pipeline] sh 00:00:07.444 + logger -p user.info -t JENKINS-CI 00:00:07.463 [Pipeline] echo 00:00:07.464 Node: CYP12 00:00:07.472 [Pipeline] sh 00:00:07.788 [Pipeline] setCustomBuildProperty 00:00:07.803 [Pipeline] echo 00:00:07.805 Cleanup processes 00:00:07.808 [Pipeline] sh 00:00:08.096 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.096 3868045 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.108 [Pipeline] sh 00:00:08.389 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.389 ++ grep -v 'sudo pgrep' 00:00:08.389 ++ awk '{print $1}' 00:00:08.389 + sudo kill -9 00:00:08.389 + true 00:00:08.408 [Pipeline] cleanWs 00:00:08.418 [WS-CLEANUP] Deleting project workspace... 00:00:08.418 [WS-CLEANUP] Deferred wipeout is used... 
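The pgrep pipeline above lists any process still referencing the workspace SPDK tree, filters out the pgrep invocation itself, keeps the PID column, and force-kills the rest; in this run only the pgrep matched, so kill received no PIDs and the trailing true absorbed the non-zero exit. A minimal standalone sketch of that cleanup pattern (the script framing and the xargs -r variant are illustrative additions, not taken from the job):

#!/usr/bin/env bash
# Sketch of the leftover-process cleanup traced above; not the job's actual script.
WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
# List processes whose command line mentions the workspace SPDK tree, drop the
# pgrep line itself, keep only the PIDs, and kill them. 'xargs -r' simply skips
# the kill when the list is empty, the case the log handles with '+ true'.
sudo pgrep -af "$WORKSPACE/spdk" \
  | grep -v 'sudo pgrep' \
  | awk '{print $1}' \
  | xargs -r sudo kill -9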
00:00:08.424 [WS-CLEANUP] done 00:00:08.428 [Pipeline] setCustomBuildProperty 00:00:08.438 [Pipeline] sh 00:00:08.718 + sudo git config --global --replace-all safe.directory '*' 00:00:08.792 [Pipeline] nodesByLabel 00:00:08.794 Found a total of 1 nodes with the 'sorcerer' label 00:00:08.804 [Pipeline] httpRequest 00:00:08.809 HttpMethod: GET 00:00:08.810 URL: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:08.813 Sending request to url: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:08.835 Response Code: HTTP/1.1 200 OK 00:00:08.835 Success: Status code 200 is in the accepted range: 200,404 00:00:08.836 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:13.822 [Pipeline] sh 00:00:14.109 + tar --no-same-owner -xf jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:14.128 [Pipeline] httpRequest 00:00:14.133 HttpMethod: GET 00:00:14.133 URL: http://10.211.164.101/packages/spdk_40b11d96241a5b40eeb065071584c4ff1a645b70.tar.gz 00:00:14.134 Sending request to url: http://10.211.164.101/packages/spdk_40b11d96241a5b40eeb065071584c4ff1a645b70.tar.gz 00:00:14.163 Response Code: HTTP/1.1 200 OK 00:00:14.164 Success: Status code 200 is in the accepted range: 200,404 00:00:14.164 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_40b11d96241a5b40eeb065071584c4ff1a645b70.tar.gz 00:01:18.922 [Pipeline] sh 00:01:19.206 + tar --no-same-owner -xf spdk_40b11d96241a5b40eeb065071584c4ff1a645b70.tar.gz 00:01:22.520 [Pipeline] sh 00:01:22.835 + git -C spdk log --oneline -n5 00:01:22.835 40b11d962 lib/vhost: define timeout values when stopping a session 00:01:22.835 db19aa5bc Revert "dpdk/crypto: increase RTE_CRYPTO_MAX_DEVS to fit QAT SYM ..." 
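The two httpRequest/tar pairs above fetch pinned snapshots of the jbp job scripts and the SPDK source from the internal package cache rather than cloning them on the node. A plain-shell equivalent of that fetch-and-unpack flow, with curl standing in for the Jenkins httpRequest step (the pipeline does not actually invoke curl):

#!/usr/bin/env bash
set -e
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
# Download the pinned snapshots from the package cache (URLs as logged above).
curl -fO http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz
curl -fO http://10.211.164.101/packages/spdk_40b11d96241a5b40eeb065071584c4ff1a645b70.tar.gz
# Unpack without restoring the archive ownership, matching the job's tar flags.
tar --no-same-owner -xf jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz
tar --no-same-owner -xf spdk_40b11d96241a5b40eeb065071584c4ff1a645b70.tar.gz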
00:01:22.835 253cca4fc nvme/cuse: Add ctrlr_lock for cuse register and unregister 00:01:22.835 c3870302f scripts/pkgdep: Fix install_shfmt() under FreeBSD 00:01:22.835 b65c4a87a scripts/pkgdep: Remove UADK from install_all_dependencies() 00:01:22.848 [Pipeline] } 00:01:22.866 [Pipeline] // stage 00:01:22.874 [Pipeline] stage 00:01:22.876 [Pipeline] { (Prepare) 00:01:22.893 [Pipeline] writeFile 00:01:22.908 [Pipeline] sh 00:01:23.196 + logger -p user.info -t JENKINS-CI 00:01:23.211 [Pipeline] sh 00:01:23.498 + logger -p user.info -t JENKINS-CI 00:01:23.511 [Pipeline] sh 00:01:23.797 + cat autorun-spdk.conf 00:01:23.797 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:23.797 SPDK_TEST_NVMF=1 00:01:23.797 SPDK_TEST_NVME_CLI=1 00:01:23.797 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:23.797 SPDK_TEST_NVMF_NICS=e810 00:01:23.797 SPDK_RUN_UBSAN=1 00:01:23.797 NET_TYPE=phy 00:01:23.805 RUN_NIGHTLY=1 00:01:23.811 [Pipeline] readFile 00:01:23.836 [Pipeline] withEnv 00:01:23.838 [Pipeline] { 00:01:23.852 [Pipeline] sh 00:01:24.137 + set -ex 00:01:24.137 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:24.137 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:24.137 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:24.137 ++ SPDK_TEST_NVMF=1 00:01:24.137 ++ SPDK_TEST_NVME_CLI=1 00:01:24.137 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:24.137 ++ SPDK_TEST_NVMF_NICS=e810 00:01:24.137 ++ SPDK_RUN_UBSAN=1 00:01:24.137 ++ NET_TYPE=phy 00:01:24.137 ++ RUN_NIGHTLY=1 00:01:24.137 + case $SPDK_TEST_NVMF_NICS in 00:01:24.137 + DRIVERS=ice 00:01:24.137 + [[ tcp == \r\d\m\a ]] 00:01:24.137 + [[ -n ice ]] 00:01:24.137 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:24.137 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:24.137 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:24.137 rmmod: ERROR: Module irdma is not currently loaded 00:01:24.137 rmmod: ERROR: Module i40iw is not currently loaded 00:01:24.137 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:24.137 + true 00:01:24.137 + for D in $DRIVERS 00:01:24.137 + sudo modprobe ice 00:01:24.137 + exit 0 00:01:24.147 [Pipeline] } 00:01:24.163 [Pipeline] // withEnv 00:01:24.167 [Pipeline] } 00:01:24.182 [Pipeline] // stage 00:01:24.190 [Pipeline] catchError 00:01:24.192 [Pipeline] { 00:01:24.205 [Pipeline] timeout 00:01:24.205 Timeout set to expire in 40 min 00:01:24.206 [Pipeline] { 00:01:24.219 [Pipeline] stage 00:01:24.220 [Pipeline] { (Tests) 00:01:24.234 [Pipeline] sh 00:01:24.515 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:24.515 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:24.515 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:24.515 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:24.515 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:24.515 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:24.515 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:24.515 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:24.515 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:24.515 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:24.515 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:24.515 + source /etc/os-release 00:01:24.515 ++ NAME='Fedora Linux' 00:01:24.515 ++ VERSION='38 (Cloud Edition)' 00:01:24.515 ++ ID=fedora 00:01:24.515 ++ VERSION_ID=38 00:01:24.515 ++ VERSION_CODENAME= 00:01:24.515 ++ PLATFORM_ID=platform:f38 00:01:24.515 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:24.515 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:24.515 ++ LOGO=fedora-logo-icon 00:01:24.515 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:24.515 ++ HOME_URL=https://fedoraproject.org/ 00:01:24.515 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:24.515 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:24.515 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:24.515 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:24.515 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:24.515 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:24.515 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:24.515 ++ SUPPORT_END=2024-05-14 00:01:24.515 ++ VARIANT='Cloud Edition' 00:01:24.515 ++ VARIANT_ID=cloud 00:01:24.515 + uname -a 00:01:24.515 Linux spdk-cyp-12 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:24.515 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:27.812 Hugepages 00:01:27.812 node hugesize free / total 00:01:27.812 node0 1048576kB 0 / 0 00:01:27.812 node0 2048kB 0 / 0 00:01:27.812 node1 1048576kB 0 / 0 00:01:27.812 node1 2048kB 0 / 0 00:01:27.812 00:01:27.812 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:27.812 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:27.812 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:27.812 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:27.812 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:27.812 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:27.812 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:27.812 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:27.812 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:28.073 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:28.073 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:28.073 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:28.073 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:28.073 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:28.073 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:28.073 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:28.073 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:28.073 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:28.073 + rm -f /tmp/spdk-ld-path 00:01:28.073 + source autorun-spdk.conf 00:01:28.073 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.073 ++ SPDK_TEST_NVMF=1 00:01:28.073 ++ SPDK_TEST_NVME_CLI=1 00:01:28.073 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:28.073 ++ SPDK_TEST_NVMF_NICS=e810 00:01:28.073 ++ SPDK_RUN_UBSAN=1 00:01:28.073 ++ NET_TYPE=phy 00:01:28.073 ++ RUN_NIGHTLY=1 00:01:28.073 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:28.073 + [[ -n '' ]] 00:01:28.073 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:28.073 + for M in /var/spdk/build-*-manifest.txt 00:01:28.073 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:28.073 + cp /var/spdk/build-pkg-manifest.txt 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:28.073 + for M in /var/spdk/build-*-manifest.txt 00:01:28.073 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:28.073 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:28.073 ++ uname 00:01:28.073 + [[ Linux == \L\i\n\u\x ]] 00:01:28.073 + sudo dmesg -T 00:01:28.334 + sudo dmesg --clear 00:01:28.334 + dmesg_pid=3869136 00:01:28.334 + [[ Fedora Linux == FreeBSD ]] 00:01:28.334 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:28.334 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:28.334 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:28.334 + [[ -x /usr/src/fio-static/fio ]] 00:01:28.334 + export FIO_BIN=/usr/src/fio-static/fio 00:01:28.334 + FIO_BIN=/usr/src/fio-static/fio 00:01:28.334 + sudo dmesg -Tw 00:01:28.334 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:28.334 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:28.334 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:28.334 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:28.334 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:28.334 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:28.334 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:28.334 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:28.334 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:28.334 Test configuration: 00:01:28.334 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.334 SPDK_TEST_NVMF=1 00:01:28.334 SPDK_TEST_NVME_CLI=1 00:01:28.334 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:28.334 SPDK_TEST_NVMF_NICS=e810 00:01:28.334 SPDK_RUN_UBSAN=1 00:01:28.334 NET_TYPE=phy 00:01:28.334 RUN_NIGHTLY=1 19:53:20 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:28.334 19:53:20 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:28.334 19:53:20 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:28.334 19:53:20 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:28.334 19:53:20 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.334 19:53:20 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.334 19:53:20 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.334 19:53:20 -- paths/export.sh@5 -- $ export 
PATH 00:01:28.334 19:53:20 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.334 19:53:20 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:28.334 19:53:20 -- common/autobuild_common.sh@437 -- $ date +%s 00:01:28.334 19:53:20 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715795600.XXXXXX 00:01:28.334 19:53:20 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715795600.cM9EUz 00:01:28.334 19:53:20 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:01:28.334 19:53:20 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:01:28.334 19:53:20 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:28.334 19:53:20 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:28.334 19:53:20 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:28.334 19:53:20 -- common/autobuild_common.sh@453 -- $ get_config_params 00:01:28.334 19:53:20 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:01:28.334 19:53:20 -- common/autotest_common.sh@10 -- $ set +x 00:01:28.334 19:53:20 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:01:28.334 19:53:20 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:01:28.334 19:53:20 -- pm/common@17 -- $ local monitor 00:01:28.334 19:53:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.334 19:53:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.334 19:53:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.334 19:53:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.334 19:53:20 -- pm/common@21 -- $ date +%s 00:01:28.334 19:53:20 -- pm/common@25 -- $ sleep 1 00:01:28.334 19:53:20 -- pm/common@21 -- $ date +%s 00:01:28.334 19:53:20 -- pm/common@21 -- $ date +%s 00:01:28.334 19:53:20 -- pm/common@21 -- $ date +%s 00:01:28.334 19:53:20 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715795600 00:01:28.334 19:53:20 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715795600 00:01:28.334 19:53:20 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715795600 00:01:28.334 19:53:20 -- pm/common@21 
-- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715795600 00:01:28.334 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715795600_collect-vmstat.pm.log 00:01:28.334 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715795600_collect-cpu-load.pm.log 00:01:28.334 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715795600_collect-cpu-temp.pm.log 00:01:28.334 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715795600_collect-bmc-pm.bmc.pm.log 00:01:29.276 19:53:21 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:01:29.276 19:53:21 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:29.276 19:53:21 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:29.276 19:53:21 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:29.276 19:53:21 -- spdk/autobuild.sh@16 -- $ date -u 00:01:29.276 Wed May 15 05:53:21 PM UTC 2024 00:01:29.276 19:53:21 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:29.537 v24.05-pre-664-g40b11d962 00:01:29.537 19:53:21 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:29.537 19:53:21 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:29.537 19:53:21 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:29.537 19:53:21 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:29.537 19:53:21 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:29.537 19:53:21 -- common/autotest_common.sh@10 -- $ set +x 00:01:29.537 ************************************ 00:01:29.537 START TEST ubsan 00:01:29.537 ************************************ 00:01:29.537 19:53:21 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:01:29.537 using ubsan 00:01:29.537 00:01:29.537 real 0m0.000s 00:01:29.537 user 0m0.000s 00:01:29.537 sys 0m0.000s 00:01:29.537 19:53:21 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:29.537 19:53:21 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:29.537 ************************************ 00:01:29.537 END TEST ubsan 00:01:29.537 ************************************ 00:01:29.537 19:53:21 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:29.537 19:53:21 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:29.537 19:53:21 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:29.537 19:53:21 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:29.537 19:53:21 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:29.537 19:53:21 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:29.537 19:53:21 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:29.537 19:53:21 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:29.537 19:53:21 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:01:29.537 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:29.537 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:30.107 Using 'verbs' RDMA provider 00:01:45.592 Configuring ISA-L (logfile: 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:57.838 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:57.838 Creating mk/config.mk...done. 00:01:57.838 Creating mk/cc.flags.mk...done. 00:01:57.838 Type 'make' to build. 00:01:57.838 19:53:50 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:01:57.838 19:53:50 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:57.838 19:53:50 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:57.838 19:53:50 -- common/autotest_common.sh@10 -- $ set +x 00:01:57.838 ************************************ 00:01:57.838 START TEST make 00:01:57.838 ************************************ 00:01:57.838 19:53:50 make -- common/autotest_common.sh@1121 -- $ make -j144 00:01:58.100 make[1]: Nothing to be done for 'all'. 00:02:06.229 The Meson build system 00:02:06.229 Version: 1.3.1 00:02:06.229 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:06.229 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:06.229 Build type: native build 00:02:06.229 Program cat found: YES (/usr/bin/cat) 00:02:06.229 Project name: DPDK 00:02:06.229 Project version: 23.11.0 00:02:06.229 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:06.229 C linker for the host machine: cc ld.bfd 2.39-16 00:02:06.229 Host machine cpu family: x86_64 00:02:06.229 Host machine cpu: x86_64 00:02:06.229 Message: ## Building in Developer Mode ## 00:02:06.229 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:06.229 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:06.229 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:06.229 Program python3 found: YES (/usr/bin/python3) 00:02:06.229 Program cat found: YES (/usr/bin/cat) 00:02:06.229 Compiler for C supports arguments -march=native: YES 00:02:06.229 Checking for size of "void *" : 8 00:02:06.229 Checking for size of "void *" : 8 (cached) 00:02:06.229 Library m found: YES 00:02:06.229 Library numa found: YES 00:02:06.229 Has header "numaif.h" : YES 00:02:06.229 Library fdt found: NO 00:02:06.229 Library execinfo found: NO 00:02:06.229 Has header "execinfo.h" : YES 00:02:06.229 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:06.229 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:06.229 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:06.229 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:06.229 Run-time dependency openssl found: YES 3.0.9 00:02:06.229 Run-time dependency libpcap found: YES 1.10.4 00:02:06.229 Has header "pcap.h" with dependency libpcap: YES 00:02:06.229 Compiler for C supports arguments -Wcast-qual: YES 00:02:06.229 Compiler for C supports arguments -Wdeprecated: YES 00:02:06.229 Compiler for C supports arguments -Wformat: YES 00:02:06.229 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:06.229 Compiler for C supports arguments -Wformat-security: NO 00:02:06.229 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:06.229 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:06.229 Compiler for C supports arguments -Wnested-externs: YES 00:02:06.230 Compiler for C supports arguments -Wold-style-definition: YES 00:02:06.230 Compiler for C supports 
arguments -Wpointer-arith: YES 00:02:06.230 Compiler for C supports arguments -Wsign-compare: YES 00:02:06.230 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:06.230 Compiler for C supports arguments -Wundef: YES 00:02:06.230 Compiler for C supports arguments -Wwrite-strings: YES 00:02:06.230 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:06.230 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:06.230 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:06.230 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:06.230 Program objdump found: YES (/usr/bin/objdump) 00:02:06.230 Compiler for C supports arguments -mavx512f: YES 00:02:06.230 Checking if "AVX512 checking" compiles: YES 00:02:06.230 Fetching value of define "__SSE4_2__" : 1 00:02:06.230 Fetching value of define "__AES__" : 1 00:02:06.230 Fetching value of define "__AVX__" : 1 00:02:06.230 Fetching value of define "__AVX2__" : 1 00:02:06.230 Fetching value of define "__AVX512BW__" : 1 00:02:06.230 Fetching value of define "__AVX512CD__" : 1 00:02:06.230 Fetching value of define "__AVX512DQ__" : 1 00:02:06.230 Fetching value of define "__AVX512F__" : 1 00:02:06.230 Fetching value of define "__AVX512VL__" : 1 00:02:06.230 Fetching value of define "__PCLMUL__" : 1 00:02:06.230 Fetching value of define "__RDRND__" : 1 00:02:06.230 Fetching value of define "__RDSEED__" : 1 00:02:06.230 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:06.230 Fetching value of define "__znver1__" : (undefined) 00:02:06.230 Fetching value of define "__znver2__" : (undefined) 00:02:06.230 Fetching value of define "__znver3__" : (undefined) 00:02:06.230 Fetching value of define "__znver4__" : (undefined) 00:02:06.230 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:06.230 Message: lib/log: Defining dependency "log" 00:02:06.230 Message: lib/kvargs: Defining dependency "kvargs" 00:02:06.230 Message: lib/telemetry: Defining dependency "telemetry" 00:02:06.230 Checking for function "getentropy" : NO 00:02:06.230 Message: lib/eal: Defining dependency "eal" 00:02:06.230 Message: lib/ring: Defining dependency "ring" 00:02:06.230 Message: lib/rcu: Defining dependency "rcu" 00:02:06.230 Message: lib/mempool: Defining dependency "mempool" 00:02:06.230 Message: lib/mbuf: Defining dependency "mbuf" 00:02:06.230 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:06.230 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:06.230 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:06.230 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:06.230 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:06.230 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:06.230 Compiler for C supports arguments -mpclmul: YES 00:02:06.230 Compiler for C supports arguments -maes: YES 00:02:06.230 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:06.230 Compiler for C supports arguments -mavx512bw: YES 00:02:06.230 Compiler for C supports arguments -mavx512dq: YES 00:02:06.230 Compiler for C supports arguments -mavx512vl: YES 00:02:06.230 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:06.230 Compiler for C supports arguments -mavx2: YES 00:02:06.230 Compiler for C supports arguments -mavx: YES 00:02:06.230 Message: lib/net: Defining dependency "net" 00:02:06.230 Message: lib/meter: Defining dependency "meter" 00:02:06.230 Message: lib/ethdev: Defining dependency "ethdev" 00:02:06.230 Message: lib/pci: 
Defining dependency "pci" 00:02:06.230 Message: lib/cmdline: Defining dependency "cmdline" 00:02:06.230 Message: lib/hash: Defining dependency "hash" 00:02:06.230 Message: lib/timer: Defining dependency "timer" 00:02:06.230 Message: lib/compressdev: Defining dependency "compressdev" 00:02:06.230 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:06.230 Message: lib/dmadev: Defining dependency "dmadev" 00:02:06.230 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:06.230 Message: lib/power: Defining dependency "power" 00:02:06.230 Message: lib/reorder: Defining dependency "reorder" 00:02:06.230 Message: lib/security: Defining dependency "security" 00:02:06.230 Has header "linux/userfaultfd.h" : YES 00:02:06.230 Has header "linux/vduse.h" : YES 00:02:06.230 Message: lib/vhost: Defining dependency "vhost" 00:02:06.230 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:06.230 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:06.230 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:06.230 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:06.230 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:06.230 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:06.230 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:06.230 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:06.230 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:06.230 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:06.230 Program doxygen found: YES (/usr/bin/doxygen) 00:02:06.230 Configuring doxy-api-html.conf using configuration 00:02:06.230 Configuring doxy-api-man.conf using configuration 00:02:06.230 Program mandb found: YES (/usr/bin/mandb) 00:02:06.230 Program sphinx-build found: NO 00:02:06.230 Configuring rte_build_config.h using configuration 00:02:06.230 Message: 00:02:06.230 ================= 00:02:06.230 Applications Enabled 00:02:06.230 ================= 00:02:06.230 00:02:06.230 apps: 00:02:06.230 00:02:06.230 00:02:06.230 Message: 00:02:06.230 ================= 00:02:06.230 Libraries Enabled 00:02:06.230 ================= 00:02:06.230 00:02:06.230 libs: 00:02:06.230 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:06.230 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:06.230 cryptodev, dmadev, power, reorder, security, vhost, 00:02:06.230 00:02:06.230 Message: 00:02:06.230 =============== 00:02:06.230 Drivers Enabled 00:02:06.230 =============== 00:02:06.230 00:02:06.230 common: 00:02:06.230 00:02:06.230 bus: 00:02:06.230 pci, vdev, 00:02:06.230 mempool: 00:02:06.230 ring, 00:02:06.230 dma: 00:02:06.230 00:02:06.230 net: 00:02:06.230 00:02:06.230 crypto: 00:02:06.230 00:02:06.230 compress: 00:02:06.230 00:02:06.230 vdpa: 00:02:06.230 00:02:06.230 00:02:06.230 Message: 00:02:06.230 ================= 00:02:06.230 Content Skipped 00:02:06.230 ================= 00:02:06.230 00:02:06.230 apps: 00:02:06.230 dumpcap: explicitly disabled via build config 00:02:06.230 graph: explicitly disabled via build config 00:02:06.230 pdump: explicitly disabled via build config 00:02:06.230 proc-info: explicitly disabled via build config 00:02:06.230 test-acl: explicitly disabled via build config 00:02:06.230 test-bbdev: explicitly disabled via build config 00:02:06.230 test-cmdline: explicitly disabled via build config 00:02:06.230 
test-compress-perf: explicitly disabled via build config 00:02:06.230 test-crypto-perf: explicitly disabled via build config 00:02:06.230 test-dma-perf: explicitly disabled via build config 00:02:06.230 test-eventdev: explicitly disabled via build config 00:02:06.230 test-fib: explicitly disabled via build config 00:02:06.230 test-flow-perf: explicitly disabled via build config 00:02:06.230 test-gpudev: explicitly disabled via build config 00:02:06.230 test-mldev: explicitly disabled via build config 00:02:06.230 test-pipeline: explicitly disabled via build config 00:02:06.230 test-pmd: explicitly disabled via build config 00:02:06.230 test-regex: explicitly disabled via build config 00:02:06.230 test-sad: explicitly disabled via build config 00:02:06.230 test-security-perf: explicitly disabled via build config 00:02:06.230 00:02:06.230 libs: 00:02:06.230 metrics: explicitly disabled via build config 00:02:06.230 acl: explicitly disabled via build config 00:02:06.230 bbdev: explicitly disabled via build config 00:02:06.230 bitratestats: explicitly disabled via build config 00:02:06.230 bpf: explicitly disabled via build config 00:02:06.230 cfgfile: explicitly disabled via build config 00:02:06.230 distributor: explicitly disabled via build config 00:02:06.230 efd: explicitly disabled via build config 00:02:06.230 eventdev: explicitly disabled via build config 00:02:06.230 dispatcher: explicitly disabled via build config 00:02:06.230 gpudev: explicitly disabled via build config 00:02:06.230 gro: explicitly disabled via build config 00:02:06.230 gso: explicitly disabled via build config 00:02:06.230 ip_frag: explicitly disabled via build config 00:02:06.230 jobstats: explicitly disabled via build config 00:02:06.230 latencystats: explicitly disabled via build config 00:02:06.230 lpm: explicitly disabled via build config 00:02:06.230 member: explicitly disabled via build config 00:02:06.230 pcapng: explicitly disabled via build config 00:02:06.230 rawdev: explicitly disabled via build config 00:02:06.230 regexdev: explicitly disabled via build config 00:02:06.230 mldev: explicitly disabled via build config 00:02:06.230 rib: explicitly disabled via build config 00:02:06.230 sched: explicitly disabled via build config 00:02:06.230 stack: explicitly disabled via build config 00:02:06.230 ipsec: explicitly disabled via build config 00:02:06.230 pdcp: explicitly disabled via build config 00:02:06.230 fib: explicitly disabled via build config 00:02:06.230 port: explicitly disabled via build config 00:02:06.230 pdump: explicitly disabled via build config 00:02:06.230 table: explicitly disabled via build config 00:02:06.230 pipeline: explicitly disabled via build config 00:02:06.230 graph: explicitly disabled via build config 00:02:06.230 node: explicitly disabled via build config 00:02:06.230 00:02:06.230 drivers: 00:02:06.230 common/cpt: not in enabled drivers build config 00:02:06.230 common/dpaax: not in enabled drivers build config 00:02:06.230 common/iavf: not in enabled drivers build config 00:02:06.230 common/idpf: not in enabled drivers build config 00:02:06.230 common/mvep: not in enabled drivers build config 00:02:06.230 common/octeontx: not in enabled drivers build config 00:02:06.230 bus/auxiliary: not in enabled drivers build config 00:02:06.230 bus/cdx: not in enabled drivers build config 00:02:06.230 bus/dpaa: not in enabled drivers build config 00:02:06.230 bus/fslmc: not in enabled drivers build config 00:02:06.230 bus/ifpga: not in enabled drivers build config 00:02:06.230 
bus/platform: not in enabled drivers build config 00:02:06.230 bus/vmbus: not in enabled drivers build config 00:02:06.230 common/cnxk: not in enabled drivers build config 00:02:06.230 common/mlx5: not in enabled drivers build config 00:02:06.230 common/nfp: not in enabled drivers build config 00:02:06.230 common/qat: not in enabled drivers build config 00:02:06.231 common/sfc_efx: not in enabled drivers build config 00:02:06.231 mempool/bucket: not in enabled drivers build config 00:02:06.231 mempool/cnxk: not in enabled drivers build config 00:02:06.231 mempool/dpaa: not in enabled drivers build config 00:02:06.231 mempool/dpaa2: not in enabled drivers build config 00:02:06.231 mempool/octeontx: not in enabled drivers build config 00:02:06.231 mempool/stack: not in enabled drivers build config 00:02:06.231 dma/cnxk: not in enabled drivers build config 00:02:06.231 dma/dpaa: not in enabled drivers build config 00:02:06.231 dma/dpaa2: not in enabled drivers build config 00:02:06.231 dma/hisilicon: not in enabled drivers build config 00:02:06.231 dma/idxd: not in enabled drivers build config 00:02:06.231 dma/ioat: not in enabled drivers build config 00:02:06.231 dma/skeleton: not in enabled drivers build config 00:02:06.231 net/af_packet: not in enabled drivers build config 00:02:06.231 net/af_xdp: not in enabled drivers build config 00:02:06.231 net/ark: not in enabled drivers build config 00:02:06.231 net/atlantic: not in enabled drivers build config 00:02:06.231 net/avp: not in enabled drivers build config 00:02:06.231 net/axgbe: not in enabled drivers build config 00:02:06.231 net/bnx2x: not in enabled drivers build config 00:02:06.231 net/bnxt: not in enabled drivers build config 00:02:06.231 net/bonding: not in enabled drivers build config 00:02:06.231 net/cnxk: not in enabled drivers build config 00:02:06.231 net/cpfl: not in enabled drivers build config 00:02:06.231 net/cxgbe: not in enabled drivers build config 00:02:06.231 net/dpaa: not in enabled drivers build config 00:02:06.231 net/dpaa2: not in enabled drivers build config 00:02:06.231 net/e1000: not in enabled drivers build config 00:02:06.231 net/ena: not in enabled drivers build config 00:02:06.231 net/enetc: not in enabled drivers build config 00:02:06.231 net/enetfec: not in enabled drivers build config 00:02:06.231 net/enic: not in enabled drivers build config 00:02:06.231 net/failsafe: not in enabled drivers build config 00:02:06.231 net/fm10k: not in enabled drivers build config 00:02:06.231 net/gve: not in enabled drivers build config 00:02:06.231 net/hinic: not in enabled drivers build config 00:02:06.231 net/hns3: not in enabled drivers build config 00:02:06.231 net/i40e: not in enabled drivers build config 00:02:06.231 net/iavf: not in enabled drivers build config 00:02:06.231 net/ice: not in enabled drivers build config 00:02:06.231 net/idpf: not in enabled drivers build config 00:02:06.231 net/igc: not in enabled drivers build config 00:02:06.231 net/ionic: not in enabled drivers build config 00:02:06.231 net/ipn3ke: not in enabled drivers build config 00:02:06.231 net/ixgbe: not in enabled drivers build config 00:02:06.231 net/mana: not in enabled drivers build config 00:02:06.231 net/memif: not in enabled drivers build config 00:02:06.231 net/mlx4: not in enabled drivers build config 00:02:06.231 net/mlx5: not in enabled drivers build config 00:02:06.231 net/mvneta: not in enabled drivers build config 00:02:06.231 net/mvpp2: not in enabled drivers build config 00:02:06.231 net/netvsc: not in enabled drivers 
build config 00:02:06.231 net/nfb: not in enabled drivers build config 00:02:06.231 net/nfp: not in enabled drivers build config 00:02:06.231 net/ngbe: not in enabled drivers build config 00:02:06.231 net/null: not in enabled drivers build config 00:02:06.231 net/octeontx: not in enabled drivers build config 00:02:06.231 net/octeon_ep: not in enabled drivers build config 00:02:06.231 net/pcap: not in enabled drivers build config 00:02:06.231 net/pfe: not in enabled drivers build config 00:02:06.231 net/qede: not in enabled drivers build config 00:02:06.231 net/ring: not in enabled drivers build config 00:02:06.231 net/sfc: not in enabled drivers build config 00:02:06.231 net/softnic: not in enabled drivers build config 00:02:06.231 net/tap: not in enabled drivers build config 00:02:06.231 net/thunderx: not in enabled drivers build config 00:02:06.231 net/txgbe: not in enabled drivers build config 00:02:06.231 net/vdev_netvsc: not in enabled drivers build config 00:02:06.231 net/vhost: not in enabled drivers build config 00:02:06.231 net/virtio: not in enabled drivers build config 00:02:06.231 net/vmxnet3: not in enabled drivers build config 00:02:06.231 raw/*: missing internal dependency, "rawdev" 00:02:06.231 crypto/armv8: not in enabled drivers build config 00:02:06.231 crypto/bcmfs: not in enabled drivers build config 00:02:06.231 crypto/caam_jr: not in enabled drivers build config 00:02:06.231 crypto/ccp: not in enabled drivers build config 00:02:06.231 crypto/cnxk: not in enabled drivers build config 00:02:06.231 crypto/dpaa_sec: not in enabled drivers build config 00:02:06.231 crypto/dpaa2_sec: not in enabled drivers build config 00:02:06.231 crypto/ipsec_mb: not in enabled drivers build config 00:02:06.231 crypto/mlx5: not in enabled drivers build config 00:02:06.231 crypto/mvsam: not in enabled drivers build config 00:02:06.231 crypto/nitrox: not in enabled drivers build config 00:02:06.231 crypto/null: not in enabled drivers build config 00:02:06.231 crypto/octeontx: not in enabled drivers build config 00:02:06.231 crypto/openssl: not in enabled drivers build config 00:02:06.231 crypto/scheduler: not in enabled drivers build config 00:02:06.231 crypto/uadk: not in enabled drivers build config 00:02:06.231 crypto/virtio: not in enabled drivers build config 00:02:06.231 compress/isal: not in enabled drivers build config 00:02:06.231 compress/mlx5: not in enabled drivers build config 00:02:06.231 compress/octeontx: not in enabled drivers build config 00:02:06.231 compress/zlib: not in enabled drivers build config 00:02:06.231 regex/*: missing internal dependency, "regexdev" 00:02:06.231 ml/*: missing internal dependency, "mldev" 00:02:06.231 vdpa/ifc: not in enabled drivers build config 00:02:06.231 vdpa/mlx5: not in enabled drivers build config 00:02:06.231 vdpa/nfp: not in enabled drivers build config 00:02:06.231 vdpa/sfc: not in enabled drivers build config 00:02:06.231 event/*: missing internal dependency, "eventdev" 00:02:06.231 baseband/*: missing internal dependency, "bbdev" 00:02:06.231 gpu/*: missing internal dependency, "gpudev" 00:02:06.231 00:02:06.231 00:02:06.802 Build targets in project: 84 00:02:06.802 00:02:06.802 DPDK 23.11.0 00:02:06.802 00:02:06.802 User defined options 00:02:06.802 buildtype : debug 00:02:06.802 default_library : shared 00:02:06.802 libdir : lib 00:02:06.802 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:06.802 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 
00:02:06.802 c_link_args : 00:02:06.802 cpu_instruction_set: native 00:02:06.802 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:02:06.802 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:02:06.802 enable_docs : false 00:02:06.802 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:06.802 enable_kmods : false 00:02:06.802 tests : false 00:02:06.802 00:02:06.802 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:07.061 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:07.325 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:07.325 [2/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:07.325 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:07.325 [4/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:07.325 [5/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:07.325 [6/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:07.325 [7/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:07.325 [8/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:07.325 [9/264] Linking static target lib/librte_kvargs.a 00:02:07.325 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:07.325 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:07.325 [12/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:07.325 [13/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:07.325 [14/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:07.325 [15/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:07.325 [16/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:07.325 [17/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:07.325 [18/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:07.325 [19/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:07.325 [20/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:07.325 [21/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:07.325 [22/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:07.325 [23/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:07.325 [24/264] Linking static target lib/librte_log.a 00:02:07.325 [25/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:07.325 [26/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:07.325 [27/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:07.325 [28/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:07.325 [29/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:07.325 [30/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 
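The "User defined options" summary above records the settings SPDK's configure passed to Meson for the bundled DPDK. A hand-written invocation reproducing the same core options would look roughly like the sketch below (normally spdk/configure and its dpdk build scripts drive this; the long disable_apps and disable_libs lists from the summary are omitted here):

#!/usr/bin/env bash
# Illustrative re-run of the DPDK Meson configuration summarized above; a sketch, not the job's command.
DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
meson setup "$DPDK/build-tmp" "$DPDK" \
  --buildtype=debug \
  --prefix="$DPDK/build" \
  --libdir=lib \
  -Ddefault_library=shared \
  -Dcpu_instruction_set=native \
  -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
  -Denable_docs=false \
  -Denable_kmods=false \
  -Dtests=false
# Build with ninja, which the log shows entering dpdk/build-tmp.
ninja -C "$DPDK/build-tmp"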
00:02:07.584 [31/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:07.584 [32/264] Linking static target lib/librte_pci.a 00:02:07.584 [33/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:07.584 [34/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:07.584 [35/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:07.584 [36/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:07.584 [37/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:07.584 [38/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:07.584 [39/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:07.584 [40/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:07.584 [41/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:07.584 [42/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:07.584 [43/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:07.584 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:07.584 [45/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:07.584 [46/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.584 [47/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.584 [48/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:07.844 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:07.844 [50/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:07.844 [51/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:07.844 [52/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:07.844 [53/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:07.844 [54/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:07.844 [55/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:07.844 [56/264] Linking static target lib/librte_ring.a 00:02:07.844 [57/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:07.844 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:07.844 [59/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:07.844 [60/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:07.844 [61/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:07.844 [62/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:07.844 [63/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:07.844 [64/264] Linking static target lib/librte_meter.a 00:02:07.844 [65/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:07.844 [66/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:07.844 [67/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:07.844 [68/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:07.844 [69/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:07.844 [70/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:07.844 [71/264] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:07.844 [72/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:07.844 [73/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:07.844 [74/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:07.844 [75/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:07.844 [76/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:07.844 [77/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:07.844 [78/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:07.844 [79/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:07.844 [80/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:07.844 [81/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:07.844 [82/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:07.844 [83/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:07.844 [84/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:07.844 [85/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:07.844 [86/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:07.844 [87/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:07.844 [88/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:07.844 [89/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:07.844 [90/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:07.844 [91/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:07.844 [92/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:07.844 [93/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:07.844 [94/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:07.844 [95/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:07.844 [96/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:07.844 [97/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:07.844 [98/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:07.844 [99/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:07.844 [100/264] Linking static target lib/librte_telemetry.a 00:02:07.844 [101/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:07.844 [102/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:07.844 [103/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:07.844 [104/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:07.844 [105/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:07.844 [106/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:07.844 [107/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:07.844 [108/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:07.844 [109/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:07.844 [110/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:07.844 
[111/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:07.844 [112/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:07.844 [113/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:07.844 [114/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:07.844 [115/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:07.844 [116/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:07.844 [117/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:07.844 [118/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:07.844 [119/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:07.844 [120/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:07.844 [121/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:07.844 [122/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:07.844 [123/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:07.844 [124/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:07.844 [125/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:07.844 [126/264] Linking static target lib/librte_cmdline.a 00:02:07.844 [127/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:07.844 [128/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:07.844 [129/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:07.844 [130/264] Linking static target lib/librte_security.a 00:02:07.844 [131/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:07.844 [132/264] Linking static target lib/librte_timer.a 00:02:07.844 [133/264] Linking static target lib/librte_dmadev.a 00:02:07.844 [134/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:07.844 [135/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:08.105 [136/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:08.105 [137/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:08.105 [138/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:08.105 [139/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:08.105 [140/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:08.105 [141/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:08.105 [142/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:08.105 [143/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:08.105 [144/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:08.105 [145/264] Linking static target lib/librte_mempool.a 00:02:08.105 [146/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:08.105 [147/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:08.105 [148/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:08.105 [149/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:08.105 [150/264] Linking static target lib/librte_rcu.a 00:02:08.105 [151/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:08.105 [152/264] Linking static target 
lib/librte_net.a 00:02:08.105 [153/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:08.105 [154/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:08.105 [155/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:08.105 [156/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.105 [157/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:08.105 [158/264] Linking static target lib/librte_reorder.a 00:02:08.105 [159/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:08.105 [160/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:08.105 [161/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:08.105 [162/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:08.105 [163/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:08.105 [164/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:08.105 [165/264] Linking target lib/librte_log.so.24.0 00:02:08.105 [166/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.105 [167/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.105 [168/264] Linking static target lib/librte_eal.a 00:02:08.105 [169/264] Linking static target lib/librte_power.a 00:02:08.105 [170/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:08.105 [171/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:08.105 [172/264] Linking static target lib/librte_compressdev.a 00:02:08.105 [173/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:08.105 [174/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:08.105 [175/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:08.105 [176/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:08.105 [177/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:08.105 [178/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:08.105 [179/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:08.105 [180/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:08.105 [181/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:08.105 [182/264] Linking static target drivers/librte_bus_vdev.a 00:02:08.105 [183/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:08.105 [184/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:08.105 [185/264] Linking static target lib/librte_mbuf.a 00:02:08.105 [186/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:08.105 [187/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:08.105 [188/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:08.367 [189/264] Linking target lib/librte_kvargs.so.24.0 00:02:08.367 [190/264] Linking static target lib/librte_hash.a 00:02:08.367 [191/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:08.367 [192/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:08.367 [193/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:08.367 [194/264] 
Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:08.367 [195/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:08.367 [196/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:08.367 [197/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.367 [198/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:08.367 [199/264] Linking static target drivers/librte_mempool_ring.a 00:02:08.367 [200/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:08.367 [201/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:08.367 [202/264] Linking static target drivers/librte_bus_pci.a 00:02:08.367 [203/264] Linking static target lib/librte_cryptodev.a 00:02:08.367 [204/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:08.367 [205/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.367 [206/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.367 [207/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.628 [208/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.628 [209/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.628 [210/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.628 [211/264] Linking target lib/librte_telemetry.so.24.0 00:02:08.628 [212/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.628 [213/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:08.888 [214/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:08.888 [215/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.888 [216/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:08.888 [217/264] Linking static target lib/librte_ethdev.a 00:02:08.888 [218/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.148 [219/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.148 [220/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.148 [221/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.148 [222/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.409 [223/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.981 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:09.981 [225/264] Linking static target lib/librte_vhost.a 00:02:10.572 [226/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.493 [227/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.089 [228/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.663 
[229/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.930 [230/264] Linking target lib/librte_eal.so.24.0 00:02:19.930 [231/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:19.930 [232/264] Linking target lib/librte_dmadev.so.24.0 00:02:19.930 [233/264] Linking target lib/librte_ring.so.24.0 00:02:19.930 [234/264] Linking target lib/librte_meter.so.24.0 00:02:19.930 [235/264] Linking target lib/librte_timer.so.24.0 00:02:19.930 [236/264] Linking target lib/librte_pci.so.24.0 00:02:19.931 [237/264] Linking target drivers/librte_bus_vdev.so.24.0 00:02:20.215 [238/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:20.215 [239/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:20.215 [240/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:20.215 [241/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:20.215 [242/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:20.215 [243/264] Linking target drivers/librte_bus_pci.so.24.0 00:02:20.215 [244/264] Linking target lib/librte_rcu.so.24.0 00:02:20.215 [245/264] Linking target lib/librte_mempool.so.24.0 00:02:20.215 [246/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:20.215 [247/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:20.516 [248/264] Linking target drivers/librte_mempool_ring.so.24.0 00:02:20.516 [249/264] Linking target lib/librte_mbuf.so.24.0 00:02:20.516 [250/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:20.516 [251/264] Linking target lib/librte_cryptodev.so.24.0 00:02:20.517 [252/264] Linking target lib/librte_compressdev.so.24.0 00:02:20.517 [253/264] Linking target lib/librte_reorder.so.24.0 00:02:20.517 [254/264] Linking target lib/librte_net.so.24.0 00:02:20.778 [255/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:20.778 [256/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:20.778 [257/264] Linking target lib/librte_hash.so.24.0 00:02:20.778 [258/264] Linking target lib/librte_security.so.24.0 00:02:20.778 [259/264] Linking target lib/librte_cmdline.so.24.0 00:02:20.778 [260/264] Linking target lib/librte_ethdev.so.24.0 00:02:21.039 [261/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:21.039 [262/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:21.039 [263/264] Linking target lib/librte_power.so.24.0 00:02:21.039 [264/264] Linking target lib/librte_vhost.so.24.0 00:02:21.039 INFO: autodetecting backend as ninja 00:02:21.039 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:22.428 CC lib/ut/ut.o 00:02:22.428 CC lib/log/log.o 00:02:22.428 CC lib/log/log_flags.o 00:02:22.428 CC lib/log/log_deprecated.o 00:02:22.428 CC lib/ut_mock/mock.o 00:02:22.428 LIB libspdk_ut_mock.a 00:02:22.428 LIB libspdk_ut.a 00:02:22.428 LIB libspdk_log.a 00:02:22.428 SO libspdk_ut_mock.so.6.0 00:02:22.428 SO libspdk_ut.so.2.0 00:02:22.428 SO libspdk_log.so.7.0 00:02:22.428 SYMLINK libspdk_ut_mock.so 00:02:22.428 SYMLINK libspdk_ut.so 00:02:22.428 SYMLINK libspdk_log.so 00:02:23.001 CC 
lib/util/base64.o 00:02:23.001 CC lib/util/bit_array.o 00:02:23.001 CC lib/util/cpuset.o 00:02:23.001 CC lib/util/crc16.o 00:02:23.001 CC lib/util/crc32.o 00:02:23.001 CC lib/util/crc32c.o 00:02:23.001 CC lib/util/crc32_ieee.o 00:02:23.001 CC lib/util/crc64.o 00:02:23.001 CC lib/util/dif.o 00:02:23.001 CXX lib/trace_parser/trace.o 00:02:23.001 CC lib/util/fd.o 00:02:23.001 CC lib/util/file.o 00:02:23.001 CC lib/util/hexlify.o 00:02:23.001 CC lib/util/iov.o 00:02:23.001 CC lib/util/math.o 00:02:23.001 CC lib/dma/dma.o 00:02:23.001 CC lib/util/pipe.o 00:02:23.001 CC lib/util/string.o 00:02:23.001 CC lib/util/strerror_tls.o 00:02:23.001 CC lib/util/uuid.o 00:02:23.001 CC lib/util/fd_group.o 00:02:23.001 CC lib/ioat/ioat.o 00:02:23.001 CC lib/util/xor.o 00:02:23.001 CC lib/util/zipf.o 00:02:23.001 CC lib/vfio_user/host/vfio_user_pci.o 00:02:23.001 CC lib/vfio_user/host/vfio_user.o 00:02:23.263 LIB libspdk_dma.a 00:02:23.263 SO libspdk_dma.so.4.0 00:02:23.263 LIB libspdk_ioat.a 00:02:23.263 SO libspdk_ioat.so.7.0 00:02:23.263 SYMLINK libspdk_dma.so 00:02:23.263 LIB libspdk_vfio_user.a 00:02:23.263 SYMLINK libspdk_ioat.so 00:02:23.263 SO libspdk_vfio_user.so.5.0 00:02:23.263 LIB libspdk_util.a 00:02:23.263 SYMLINK libspdk_vfio_user.so 00:02:23.525 SO libspdk_util.so.9.0 00:02:23.525 SYMLINK libspdk_util.so 00:02:23.786 LIB libspdk_trace_parser.a 00:02:23.786 SO libspdk_trace_parser.so.5.0 00:02:23.786 SYMLINK libspdk_trace_parser.so 00:02:24.046 CC lib/rdma/common.o 00:02:24.046 CC lib/rdma/rdma_verbs.o 00:02:24.046 CC lib/idxd/idxd.o 00:02:24.046 CC lib/idxd/idxd_user.o 00:02:24.047 CC lib/conf/conf.o 00:02:24.047 CC lib/json/json_parse.o 00:02:24.047 CC lib/json/json_util.o 00:02:24.047 CC lib/json/json_write.o 00:02:24.047 CC lib/env_dpdk/env.o 00:02:24.047 CC lib/vmd/vmd.o 00:02:24.047 CC lib/env_dpdk/memory.o 00:02:24.047 CC lib/vmd/led.o 00:02:24.047 CC lib/env_dpdk/pci.o 00:02:24.047 CC lib/env_dpdk/init.o 00:02:24.047 CC lib/env_dpdk/threads.o 00:02:24.047 CC lib/env_dpdk/pci_ioat.o 00:02:24.047 CC lib/env_dpdk/pci_virtio.o 00:02:24.047 CC lib/env_dpdk/pci_vmd.o 00:02:24.047 CC lib/env_dpdk/pci_idxd.o 00:02:24.047 CC lib/env_dpdk/pci_event.o 00:02:24.047 CC lib/env_dpdk/sigbus_handler.o 00:02:24.047 CC lib/env_dpdk/pci_dpdk.o 00:02:24.047 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:24.047 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:24.309 LIB libspdk_conf.a 00:02:24.309 LIB libspdk_rdma.a 00:02:24.309 SO libspdk_conf.so.6.0 00:02:24.309 SO libspdk_rdma.so.6.0 00:02:24.309 LIB libspdk_json.a 00:02:24.309 SYMLINK libspdk_conf.so 00:02:24.309 SO libspdk_json.so.6.0 00:02:24.309 SYMLINK libspdk_rdma.so 00:02:24.309 SYMLINK libspdk_json.so 00:02:24.309 LIB libspdk_idxd.a 00:02:24.571 SO libspdk_idxd.so.12.0 00:02:24.571 LIB libspdk_vmd.a 00:02:24.571 SYMLINK libspdk_idxd.so 00:02:24.571 SO libspdk_vmd.so.6.0 00:02:24.571 SYMLINK libspdk_vmd.so 00:02:24.832 CC lib/jsonrpc/jsonrpc_server.o 00:02:24.832 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:24.832 CC lib/jsonrpc/jsonrpc_client.o 00:02:24.832 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:25.093 LIB libspdk_jsonrpc.a 00:02:25.093 SO libspdk_jsonrpc.so.6.0 00:02:25.093 SYMLINK libspdk_jsonrpc.so 00:02:25.093 LIB libspdk_env_dpdk.a 00:02:25.354 SO libspdk_env_dpdk.so.14.0 00:02:25.354 SYMLINK libspdk_env_dpdk.so 00:02:25.354 CC lib/rpc/rpc.o 00:02:25.615 LIB libspdk_rpc.a 00:02:25.615 SO libspdk_rpc.so.6.0 00:02:25.877 SYMLINK libspdk_rpc.so 00:02:26.137 CC lib/keyring/keyring.o 00:02:26.137 CC lib/notify/notify.o 00:02:26.137 CC lib/keyring/keyring_rpc.o 
00:02:26.137 CC lib/notify/notify_rpc.o 00:02:26.137 CC lib/trace/trace.o 00:02:26.137 CC lib/trace/trace_flags.o 00:02:26.137 CC lib/trace/trace_rpc.o 00:02:26.398 LIB libspdk_notify.a 00:02:26.398 LIB libspdk_keyring.a 00:02:26.398 SO libspdk_notify.so.6.0 00:02:26.398 SO libspdk_keyring.so.1.0 00:02:26.398 LIB libspdk_trace.a 00:02:26.398 SYMLINK libspdk_notify.so 00:02:26.398 SO libspdk_trace.so.10.0 00:02:26.398 SYMLINK libspdk_keyring.so 00:02:26.399 SYMLINK libspdk_trace.so 00:02:27.024 CC lib/sock/sock.o 00:02:27.024 CC lib/thread/thread.o 00:02:27.024 CC lib/sock/sock_rpc.o 00:02:27.024 CC lib/thread/iobuf.o 00:02:27.286 LIB libspdk_sock.a 00:02:27.286 SO libspdk_sock.so.9.0 00:02:27.286 SYMLINK libspdk_sock.so 00:02:27.859 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:27.859 CC lib/nvme/nvme_ctrlr.o 00:02:27.859 CC lib/nvme/nvme_fabric.o 00:02:27.859 CC lib/nvme/nvme_ns_cmd.o 00:02:27.859 CC lib/nvme/nvme_ns.o 00:02:27.859 CC lib/nvme/nvme_pcie_common.o 00:02:27.859 CC lib/nvme/nvme_pcie.o 00:02:27.859 CC lib/nvme/nvme_qpair.o 00:02:27.859 CC lib/nvme/nvme.o 00:02:27.859 CC lib/nvme/nvme_quirks.o 00:02:27.859 CC lib/nvme/nvme_transport.o 00:02:27.859 CC lib/nvme/nvme_discovery.o 00:02:27.859 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:27.859 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:27.859 CC lib/nvme/nvme_tcp.o 00:02:27.859 CC lib/nvme/nvme_opal.o 00:02:27.859 CC lib/nvme/nvme_io_msg.o 00:02:27.859 CC lib/nvme/nvme_poll_group.o 00:02:27.859 CC lib/nvme/nvme_zns.o 00:02:27.859 CC lib/nvme/nvme_stubs.o 00:02:27.859 CC lib/nvme/nvme_auth.o 00:02:27.859 CC lib/nvme/nvme_cuse.o 00:02:27.859 CC lib/nvme/nvme_rdma.o 00:02:28.120 LIB libspdk_thread.a 00:02:28.120 SO libspdk_thread.so.10.0 00:02:28.120 SYMLINK libspdk_thread.so 00:02:28.692 CC lib/accel/accel.o 00:02:28.692 CC lib/accel/accel_sw.o 00:02:28.692 CC lib/virtio/virtio.o 00:02:28.692 CC lib/accel/accel_rpc.o 00:02:28.692 CC lib/init/json_config.o 00:02:28.692 CC lib/virtio/virtio_vhost_user.o 00:02:28.692 CC lib/virtio/virtio_pci.o 00:02:28.692 CC lib/init/subsystem.o 00:02:28.692 CC lib/init/subsystem_rpc.o 00:02:28.692 CC lib/virtio/virtio_vfio_user.o 00:02:28.692 CC lib/blob/request.o 00:02:28.692 CC lib/blob/blobstore.o 00:02:28.692 CC lib/init/rpc.o 00:02:28.692 CC lib/blob/zeroes.o 00:02:28.692 CC lib/blob/blob_bs_dev.o 00:02:28.692 LIB libspdk_init.a 00:02:28.954 SO libspdk_init.so.5.0 00:02:28.954 LIB libspdk_virtio.a 00:02:28.954 SYMLINK libspdk_init.so 00:02:28.954 SO libspdk_virtio.so.7.0 00:02:28.954 SYMLINK libspdk_virtio.so 00:02:29.215 CC lib/event/app.o 00:02:29.215 CC lib/event/reactor.o 00:02:29.215 CC lib/event/log_rpc.o 00:02:29.215 CC lib/event/app_rpc.o 00:02:29.215 CC lib/event/scheduler_static.o 00:02:29.476 LIB libspdk_accel.a 00:02:29.476 LIB libspdk_nvme.a 00:02:29.476 SO libspdk_accel.so.15.0 00:02:29.476 SYMLINK libspdk_accel.so 00:02:29.476 SO libspdk_nvme.so.13.0 00:02:29.738 LIB libspdk_event.a 00:02:29.738 SO libspdk_event.so.13.0 00:02:29.738 SYMLINK libspdk_event.so 00:02:29.999 SYMLINK libspdk_nvme.so 00:02:29.999 CC lib/bdev/bdev.o 00:02:29.999 CC lib/bdev/bdev_rpc.o 00:02:29.999 CC lib/bdev/bdev_zone.o 00:02:29.999 CC lib/bdev/part.o 00:02:29.999 CC lib/bdev/scsi_nvme.o 00:02:30.942 LIB libspdk_blob.a 00:02:31.203 SO libspdk_blob.so.11.0 00:02:31.203 SYMLINK libspdk_blob.so 00:02:31.465 CC lib/lvol/lvol.o 00:02:31.465 CC lib/blobfs/blobfs.o 00:02:31.465 CC lib/blobfs/tree.o 00:02:32.038 LIB libspdk_bdev.a 00:02:32.038 SO libspdk_bdev.so.15.0 00:02:32.299 LIB libspdk_blobfs.a 00:02:32.299 SYMLINK 
libspdk_bdev.so 00:02:32.299 SO libspdk_blobfs.so.10.0 00:02:32.299 LIB libspdk_lvol.a 00:02:32.299 SO libspdk_lvol.so.10.0 00:02:32.299 SYMLINK libspdk_blobfs.so 00:02:32.560 SYMLINK libspdk_lvol.so 00:02:32.560 CC lib/scsi/dev.o 00:02:32.560 CC lib/scsi/lun.o 00:02:32.560 CC lib/scsi/port.o 00:02:32.560 CC lib/scsi/scsi.o 00:02:32.560 CC lib/scsi/scsi_bdev.o 00:02:32.560 CC lib/scsi/scsi_pr.o 00:02:32.560 CC lib/scsi/scsi_rpc.o 00:02:32.560 CC lib/scsi/task.o 00:02:32.560 CC lib/ublk/ublk.o 00:02:32.560 CC lib/ublk/ublk_rpc.o 00:02:32.560 CC lib/nvmf/ctrlr.o 00:02:32.560 CC lib/nvmf/ctrlr_discovery.o 00:02:32.560 CC lib/nbd/nbd.o 00:02:32.560 CC lib/nvmf/ctrlr_bdev.o 00:02:32.560 CC lib/nvmf/subsystem.o 00:02:32.560 CC lib/ftl/ftl_core.o 00:02:32.560 CC lib/nbd/nbd_rpc.o 00:02:32.560 CC lib/ftl/ftl_init.o 00:02:32.560 CC lib/nvmf/nvmf_rpc.o 00:02:32.560 CC lib/ftl/ftl_layout.o 00:02:32.560 CC lib/nvmf/nvmf.o 00:02:32.560 CC lib/nvmf/transport.o 00:02:32.560 CC lib/ftl/ftl_debug.o 00:02:32.560 CC lib/ftl/ftl_io.o 00:02:32.560 CC lib/nvmf/tcp.o 00:02:32.560 CC lib/nvmf/stubs.o 00:02:32.560 CC lib/ftl/ftl_sb.o 00:02:32.560 CC lib/ftl/ftl_l2p.o 00:02:32.560 CC lib/nvmf/mdns_server.o 00:02:32.560 CC lib/nvmf/rdma.o 00:02:32.560 CC lib/ftl/ftl_l2p_flat.o 00:02:32.560 CC lib/nvmf/auth.o 00:02:32.560 CC lib/ftl/ftl_nv_cache.o 00:02:32.560 CC lib/ftl/ftl_band.o 00:02:32.560 CC lib/ftl/ftl_writer.o 00:02:32.560 CC lib/ftl/ftl_band_ops.o 00:02:32.560 CC lib/ftl/ftl_rq.o 00:02:32.560 CC lib/ftl/ftl_reloc.o 00:02:32.560 CC lib/ftl/ftl_l2p_cache.o 00:02:32.560 CC lib/ftl/ftl_p2l.o 00:02:32.560 CC lib/ftl/mngt/ftl_mngt.o 00:02:32.560 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:32.560 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:32.560 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:32.560 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:32.560 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:32.560 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:32.560 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:32.560 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:32.560 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:32.560 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:32.560 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:32.560 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:32.560 CC lib/ftl/utils/ftl_conf.o 00:02:32.560 CC lib/ftl/utils/ftl_md.o 00:02:32.560 CC lib/ftl/utils/ftl_mempool.o 00:02:32.560 CC lib/ftl/utils/ftl_bitmap.o 00:02:32.560 CC lib/ftl/utils/ftl_property.o 00:02:32.560 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:32.560 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:32.560 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:32.819 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:32.819 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:32.819 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:32.819 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:32.819 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:32.819 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:32.819 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:32.819 CC lib/ftl/base/ftl_base_dev.o 00:02:32.819 CC lib/ftl/ftl_trace.o 00:02:32.819 CC lib/ftl/base/ftl_base_bdev.o 00:02:33.079 LIB libspdk_nbd.a 00:02:33.079 SO libspdk_nbd.so.7.0 00:02:33.079 LIB libspdk_scsi.a 00:02:33.340 SO libspdk_scsi.so.9.0 00:02:33.340 SYMLINK libspdk_nbd.so 00:02:33.340 LIB libspdk_ublk.a 00:02:33.340 SYMLINK libspdk_scsi.so 00:02:33.340 SO libspdk_ublk.so.3.0 00:02:33.340 SYMLINK libspdk_ublk.so 00:02:33.601 LIB libspdk_ftl.a 00:02:33.601 SO libspdk_ftl.so.9.0 00:02:33.601 CC lib/iscsi/conn.o 00:02:33.601 CC lib/iscsi/iscsi.o 00:02:33.601 CC lib/iscsi/init_grp.o 00:02:33.601 CC lib/vhost/vhost_rpc.o 00:02:33.601 CC 
lib/vhost/vhost.o 00:02:33.601 CC lib/iscsi/md5.o 00:02:33.601 CC lib/iscsi/param.o 00:02:33.601 CC lib/iscsi/portal_grp.o 00:02:33.601 CC lib/vhost/vhost_scsi.o 00:02:33.601 CC lib/iscsi/tgt_node.o 00:02:33.601 CC lib/vhost/vhost_blk.o 00:02:33.601 CC lib/iscsi/iscsi_subsystem.o 00:02:33.601 CC lib/vhost/rte_vhost_user.o 00:02:33.601 CC lib/iscsi/iscsi_rpc.o 00:02:33.601 CC lib/iscsi/task.o 00:02:34.173 SYMLINK libspdk_ftl.so 00:02:34.434 LIB libspdk_nvmf.a 00:02:34.434 SO libspdk_nvmf.so.18.0 00:02:34.696 LIB libspdk_vhost.a 00:02:34.696 SO libspdk_vhost.so.8.0 00:02:34.696 SYMLINK libspdk_nvmf.so 00:02:34.696 SYMLINK libspdk_vhost.so 00:02:34.957 LIB libspdk_iscsi.a 00:02:34.957 SO libspdk_iscsi.so.8.0 00:02:35.218 SYMLINK libspdk_iscsi.so 00:02:35.790 CC module/env_dpdk/env_dpdk_rpc.o 00:02:35.790 LIB libspdk_env_dpdk_rpc.a 00:02:35.790 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:35.790 CC module/blob/bdev/blob_bdev.o 00:02:35.790 SO libspdk_env_dpdk_rpc.so.6.0 00:02:35.790 CC module/accel/ioat/accel_ioat.o 00:02:35.790 CC module/accel/ioat/accel_ioat_rpc.o 00:02:35.790 CC module/sock/posix/posix.o 00:02:35.790 CC module/accel/iaa/accel_iaa.o 00:02:35.790 CC module/accel/error/accel_error.o 00:02:35.790 CC module/accel/dsa/accel_dsa.o 00:02:35.790 CC module/accel/iaa/accel_iaa_rpc.o 00:02:35.790 CC module/accel/error/accel_error_rpc.o 00:02:35.790 CC module/accel/dsa/accel_dsa_rpc.o 00:02:35.790 CC module/keyring/file/keyring.o 00:02:35.790 CC module/keyring/file/keyring_rpc.o 00:02:35.790 CC module/scheduler/gscheduler/gscheduler.o 00:02:35.790 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:35.790 SYMLINK libspdk_env_dpdk_rpc.so 00:02:36.051 LIB libspdk_keyring_file.a 00:02:36.051 LIB libspdk_scheduler_dynamic.a 00:02:36.051 LIB libspdk_scheduler_gscheduler.a 00:02:36.051 LIB libspdk_scheduler_dpdk_governor.a 00:02:36.051 LIB libspdk_accel_error.a 00:02:36.051 LIB libspdk_accel_ioat.a 00:02:36.051 LIB libspdk_accel_iaa.a 00:02:36.051 SO libspdk_scheduler_dynamic.so.4.0 00:02:36.051 SO libspdk_scheduler_gscheduler.so.4.0 00:02:36.051 SO libspdk_keyring_file.so.1.0 00:02:36.051 SO libspdk_accel_ioat.so.6.0 00:02:36.051 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:36.051 SO libspdk_accel_error.so.2.0 00:02:36.051 LIB libspdk_blob_bdev.a 00:02:36.051 LIB libspdk_accel_dsa.a 00:02:36.051 SO libspdk_accel_iaa.so.3.0 00:02:36.051 SYMLINK libspdk_keyring_file.so 00:02:36.051 SO libspdk_blob_bdev.so.11.0 00:02:36.051 SO libspdk_accel_dsa.so.5.0 00:02:36.051 SYMLINK libspdk_scheduler_gscheduler.so 00:02:36.051 SYMLINK libspdk_accel_ioat.so 00:02:36.051 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:36.051 SYMLINK libspdk_scheduler_dynamic.so 00:02:36.051 SYMLINK libspdk_accel_error.so 00:02:36.312 SYMLINK libspdk_accel_iaa.so 00:02:36.312 SYMLINK libspdk_blob_bdev.so 00:02:36.312 SYMLINK libspdk_accel_dsa.so 00:02:36.573 LIB libspdk_sock_posix.a 00:02:36.573 SO libspdk_sock_posix.so.6.0 00:02:36.573 SYMLINK libspdk_sock_posix.so 00:02:36.833 CC module/bdev/aio/bdev_aio.o 00:02:36.833 CC module/bdev/delay/vbdev_delay.o 00:02:36.833 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:36.833 CC module/bdev/aio/bdev_aio_rpc.o 00:02:36.833 CC module/bdev/passthru/vbdev_passthru.o 00:02:36.833 CC module/bdev/null/bdev_null.o 00:02:36.833 CC module/bdev/null/bdev_null_rpc.o 00:02:36.833 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:36.833 CC module/bdev/malloc/bdev_malloc.o 00:02:36.833 CC module/bdev/error/vbdev_error.o 00:02:36.833 CC module/bdev/ftl/bdev_ftl.o 00:02:36.833 CC 
module/bdev/gpt/gpt.o 00:02:36.833 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:36.833 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:36.833 CC module/bdev/error/vbdev_error_rpc.o 00:02:36.833 CC module/bdev/iscsi/bdev_iscsi.o 00:02:36.833 CC module/bdev/gpt/vbdev_gpt.o 00:02:36.833 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:36.833 CC module/bdev/lvol/vbdev_lvol.o 00:02:36.833 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:36.833 CC module/bdev/nvme/bdev_nvme.o 00:02:36.833 CC module/blobfs/bdev/blobfs_bdev.o 00:02:36.833 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:36.833 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:36.833 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:36.833 CC module/bdev/split/vbdev_split.o 00:02:36.833 CC module/bdev/raid/bdev_raid.o 00:02:36.833 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:36.833 CC module/bdev/nvme/nvme_rpc.o 00:02:36.833 CC module/bdev/nvme/vbdev_opal.o 00:02:36.833 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:36.833 CC module/bdev/raid/bdev_raid_rpc.o 00:02:36.833 CC module/bdev/nvme/bdev_mdns_client.o 00:02:36.833 CC module/bdev/raid/bdev_raid_sb.o 00:02:36.833 CC module/bdev/split/vbdev_split_rpc.o 00:02:36.833 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:36.833 CC module/bdev/raid/raid0.o 00:02:36.833 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:36.833 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:36.833 CC module/bdev/raid/raid1.o 00:02:36.833 CC module/bdev/raid/concat.o 00:02:36.833 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:37.093 LIB libspdk_blobfs_bdev.a 00:02:37.093 SO libspdk_blobfs_bdev.so.6.0 00:02:37.093 LIB libspdk_bdev_null.a 00:02:37.093 LIB libspdk_bdev_error.a 00:02:37.093 LIB libspdk_bdev_split.a 00:02:37.093 LIB libspdk_bdev_passthru.a 00:02:37.093 SO libspdk_bdev_error.so.6.0 00:02:37.093 LIB libspdk_bdev_ftl.a 00:02:37.093 SO libspdk_bdev_null.so.6.0 00:02:37.093 LIB libspdk_bdev_gpt.a 00:02:37.093 SYMLINK libspdk_blobfs_bdev.so 00:02:37.094 LIB libspdk_bdev_aio.a 00:02:37.094 SO libspdk_bdev_split.so.6.0 00:02:37.094 SO libspdk_bdev_passthru.so.6.0 00:02:37.094 SO libspdk_bdev_ftl.so.6.0 00:02:37.094 SO libspdk_bdev_aio.so.6.0 00:02:37.094 LIB libspdk_bdev_zone_block.a 00:02:37.094 LIB libspdk_bdev_malloc.a 00:02:37.094 SO libspdk_bdev_gpt.so.6.0 00:02:37.094 LIB libspdk_bdev_iscsi.a 00:02:37.094 LIB libspdk_bdev_delay.a 00:02:37.094 SYMLINK libspdk_bdev_error.so 00:02:37.094 SYMLINK libspdk_bdev_null.so 00:02:37.094 SYMLINK libspdk_bdev_split.so 00:02:37.094 SYMLINK libspdk_bdev_passthru.so 00:02:37.094 SO libspdk_bdev_zone_block.so.6.0 00:02:37.094 SO libspdk_bdev_malloc.so.6.0 00:02:37.094 SO libspdk_bdev_iscsi.so.6.0 00:02:37.094 SO libspdk_bdev_delay.so.6.0 00:02:37.355 SYMLINK libspdk_bdev_aio.so 00:02:37.355 SYMLINK libspdk_bdev_ftl.so 00:02:37.355 SYMLINK libspdk_bdev_gpt.so 00:02:37.355 LIB libspdk_bdev_lvol.a 00:02:37.355 SYMLINK libspdk_bdev_malloc.so 00:02:37.355 SYMLINK libspdk_bdev_zone_block.so 00:02:37.355 SYMLINK libspdk_bdev_delay.so 00:02:37.355 SYMLINK libspdk_bdev_iscsi.so 00:02:37.355 LIB libspdk_bdev_virtio.a 00:02:37.355 SO libspdk_bdev_lvol.so.6.0 00:02:37.355 SO libspdk_bdev_virtio.so.6.0 00:02:37.355 SYMLINK libspdk_bdev_lvol.so 00:02:37.355 SYMLINK libspdk_bdev_virtio.so 00:02:37.615 LIB libspdk_bdev_raid.a 00:02:37.615 SO libspdk_bdev_raid.so.6.0 00:02:37.876 SYMLINK libspdk_bdev_raid.so 00:02:38.817 LIB libspdk_bdev_nvme.a 00:02:38.817 SO libspdk_bdev_nvme.so.7.0 00:02:38.817 SYMLINK libspdk_bdev_nvme.so 00:02:39.760 CC module/event/subsystems/vmd/vmd.o 00:02:39.760 CC 
module/event/subsystems/vmd/vmd_rpc.o 00:02:39.760 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:39.760 CC module/event/subsystems/sock/sock.o 00:02:39.760 CC module/event/subsystems/iobuf/iobuf.o 00:02:39.760 CC module/event/subsystems/keyring/keyring.o 00:02:39.760 CC module/event/subsystems/scheduler/scheduler.o 00:02:39.760 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:39.760 LIB libspdk_event_vmd.a 00:02:39.760 LIB libspdk_event_sock.a 00:02:39.760 LIB libspdk_event_scheduler.a 00:02:39.760 LIB libspdk_event_vhost_blk.a 00:02:39.760 LIB libspdk_event_keyring.a 00:02:39.760 LIB libspdk_event_iobuf.a 00:02:39.760 SO libspdk_event_sock.so.5.0 00:02:39.760 SO libspdk_event_scheduler.so.4.0 00:02:39.760 SO libspdk_event_vmd.so.6.0 00:02:39.760 SO libspdk_event_vhost_blk.so.3.0 00:02:39.760 SO libspdk_event_keyring.so.1.0 00:02:39.760 SO libspdk_event_iobuf.so.3.0 00:02:39.760 SYMLINK libspdk_event_sock.so 00:02:39.760 SYMLINK libspdk_event_scheduler.so 00:02:39.760 SYMLINK libspdk_event_keyring.so 00:02:40.020 SYMLINK libspdk_event_vhost_blk.so 00:02:40.020 SYMLINK libspdk_event_vmd.so 00:02:40.020 SYMLINK libspdk_event_iobuf.so 00:02:40.281 CC module/event/subsystems/accel/accel.o 00:02:40.542 LIB libspdk_event_accel.a 00:02:40.542 SO libspdk_event_accel.so.6.0 00:02:40.542 SYMLINK libspdk_event_accel.so 00:02:40.802 CC module/event/subsystems/bdev/bdev.o 00:02:41.063 LIB libspdk_event_bdev.a 00:02:41.063 SO libspdk_event_bdev.so.6.0 00:02:41.063 SYMLINK libspdk_event_bdev.so 00:02:41.636 CC module/event/subsystems/nbd/nbd.o 00:02:41.636 CC module/event/subsystems/scsi/scsi.o 00:02:41.636 CC module/event/subsystems/ublk/ublk.o 00:02:41.636 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:41.636 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:41.636 LIB libspdk_event_nbd.a 00:02:41.636 LIB libspdk_event_ublk.a 00:02:41.636 LIB libspdk_event_scsi.a 00:02:41.636 SO libspdk_event_nbd.so.6.0 00:02:41.636 SO libspdk_event_ublk.so.3.0 00:02:41.636 SO libspdk_event_scsi.so.6.0 00:02:41.897 LIB libspdk_event_nvmf.a 00:02:41.897 SYMLINK libspdk_event_nbd.so 00:02:41.897 SYMLINK libspdk_event_ublk.so 00:02:41.897 SYMLINK libspdk_event_scsi.so 00:02:41.897 SO libspdk_event_nvmf.so.6.0 00:02:41.897 SYMLINK libspdk_event_nvmf.so 00:02:42.158 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:42.158 CC module/event/subsystems/iscsi/iscsi.o 00:02:42.419 LIB libspdk_event_vhost_scsi.a 00:02:42.419 LIB libspdk_event_iscsi.a 00:02:42.419 SO libspdk_event_vhost_scsi.so.3.0 00:02:42.419 SO libspdk_event_iscsi.so.6.0 00:02:42.419 SYMLINK libspdk_event_vhost_scsi.so 00:02:42.419 SYMLINK libspdk_event_iscsi.so 00:02:42.681 SO libspdk.so.6.0 00:02:42.681 SYMLINK libspdk.so 00:02:42.941 CC app/spdk_nvme_identify/identify.o 00:02:42.941 CC app/spdk_nvme_discover/discovery_aer.o 00:02:43.214 CC app/trace_record/trace_record.o 00:02:43.215 CC app/spdk_top/spdk_top.o 00:02:43.215 TEST_HEADER include/spdk/accel_module.h 00:02:43.215 TEST_HEADER include/spdk/base64.h 00:02:43.215 CC app/spdk_nvme_perf/perf.o 00:02:43.215 CXX app/trace/trace.o 00:02:43.215 TEST_HEADER include/spdk/barrier.h 00:02:43.215 TEST_HEADER include/spdk/assert.h 00:02:43.215 TEST_HEADER include/spdk/accel.h 00:02:43.215 TEST_HEADER include/spdk/bdev_module.h 00:02:43.215 TEST_HEADER include/spdk/bdev.h 00:02:43.215 CC app/spdk_lspci/spdk_lspci.o 00:02:43.215 TEST_HEADER include/spdk/bdev_zone.h 00:02:43.215 TEST_HEADER include/spdk/bit_pool.h 00:02:43.215 TEST_HEADER include/spdk/blob_bdev.h 00:02:43.215 TEST_HEADER 
include/spdk/bit_array.h 00:02:43.215 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:43.215 TEST_HEADER include/spdk/config.h 00:02:43.215 TEST_HEADER include/spdk/blob.h 00:02:43.215 TEST_HEADER include/spdk/conf.h 00:02:43.215 TEST_HEADER include/spdk/crc16.h 00:02:43.215 TEST_HEADER include/spdk/cpuset.h 00:02:43.215 TEST_HEADER include/spdk/blobfs.h 00:02:43.215 TEST_HEADER include/spdk/crc32.h 00:02:43.215 TEST_HEADER include/spdk/crc64.h 00:02:43.215 TEST_HEADER include/spdk/dif.h 00:02:43.215 TEST_HEADER include/spdk/dma.h 00:02:43.215 TEST_HEADER include/spdk/endian.h 00:02:43.215 TEST_HEADER include/spdk/env_dpdk.h 00:02:43.215 TEST_HEADER include/spdk/env.h 00:02:43.215 TEST_HEADER include/spdk/fd_group.h 00:02:43.215 TEST_HEADER include/spdk/fd.h 00:02:43.215 TEST_HEADER include/spdk/event.h 00:02:43.215 TEST_HEADER include/spdk/file.h 00:02:43.215 CC test/rpc_client/rpc_client_test.o 00:02:43.215 TEST_HEADER include/spdk/ftl.h 00:02:43.215 TEST_HEADER include/spdk/gpt_spec.h 00:02:43.215 TEST_HEADER include/spdk/hexlify.h 00:02:43.215 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:43.215 TEST_HEADER include/spdk/histogram_data.h 00:02:43.215 TEST_HEADER include/spdk/idxd.h 00:02:43.215 TEST_HEADER include/spdk/init.h 00:02:43.215 TEST_HEADER include/spdk/ioat.h 00:02:43.215 TEST_HEADER include/spdk/idxd_spec.h 00:02:43.215 TEST_HEADER include/spdk/iscsi_spec.h 00:02:43.215 TEST_HEADER include/spdk/ioat_spec.h 00:02:43.215 CC app/nvmf_tgt/nvmf_main.o 00:02:43.215 TEST_HEADER include/spdk/keyring.h 00:02:43.215 CC app/vhost/vhost.o 00:02:43.215 TEST_HEADER include/spdk/keyring_module.h 00:02:43.215 TEST_HEADER include/spdk/json.h 00:02:43.215 TEST_HEADER include/spdk/jsonrpc.h 00:02:43.215 TEST_HEADER include/spdk/log.h 00:02:43.215 TEST_HEADER include/spdk/likely.h 00:02:43.215 CC app/spdk_dd/spdk_dd.o 00:02:43.215 TEST_HEADER include/spdk/lvol.h 00:02:43.215 TEST_HEADER include/spdk/memory.h 00:02:43.215 TEST_HEADER include/spdk/mmio.h 00:02:43.215 TEST_HEADER include/spdk/notify.h 00:02:43.215 TEST_HEADER include/spdk/nbd.h 00:02:43.215 TEST_HEADER include/spdk/nvme.h 00:02:43.215 TEST_HEADER include/spdk/nvme_intel.h 00:02:43.215 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:43.215 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:43.215 TEST_HEADER include/spdk/nvme_zns.h 00:02:43.215 TEST_HEADER include/spdk/nvme_spec.h 00:02:43.215 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:43.215 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:43.215 TEST_HEADER include/spdk/nvmf.h 00:02:43.215 TEST_HEADER include/spdk/nvmf_spec.h 00:02:43.215 CC app/iscsi_tgt/iscsi_tgt.o 00:02:43.215 TEST_HEADER include/spdk/nvmf_transport.h 00:02:43.215 TEST_HEADER include/spdk/opal.h 00:02:43.215 TEST_HEADER include/spdk/opal_spec.h 00:02:43.215 TEST_HEADER include/spdk/pci_ids.h 00:02:43.215 TEST_HEADER include/spdk/pipe.h 00:02:43.215 TEST_HEADER include/spdk/reduce.h 00:02:43.215 TEST_HEADER include/spdk/rpc.h 00:02:43.215 TEST_HEADER include/spdk/queue.h 00:02:43.215 TEST_HEADER include/spdk/scsi.h 00:02:43.215 TEST_HEADER include/spdk/scsi_spec.h 00:02:43.215 TEST_HEADER include/spdk/scheduler.h 00:02:43.215 TEST_HEADER include/spdk/sock.h 00:02:43.215 TEST_HEADER include/spdk/string.h 00:02:43.215 TEST_HEADER include/spdk/stdinc.h 00:02:43.215 TEST_HEADER include/spdk/trace.h 00:02:43.215 TEST_HEADER include/spdk/thread.h 00:02:43.215 TEST_HEADER include/spdk/trace_parser.h 00:02:43.215 TEST_HEADER include/spdk/tree.h 00:02:43.215 TEST_HEADER include/spdk/ublk.h 00:02:43.215 TEST_HEADER 
include/spdk/util.h 00:02:43.215 TEST_HEADER include/spdk/uuid.h 00:02:43.215 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:43.215 TEST_HEADER include/spdk/version.h 00:02:43.215 CC app/spdk_tgt/spdk_tgt.o 00:02:43.215 TEST_HEADER include/spdk/vhost.h 00:02:43.215 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:43.215 TEST_HEADER include/spdk/vmd.h 00:02:43.215 TEST_HEADER include/spdk/zipf.h 00:02:43.215 TEST_HEADER include/spdk/xor.h 00:02:43.215 CXX test/cpp_headers/accel.o 00:02:43.215 CXX test/cpp_headers/accel_module.o 00:02:43.215 CXX test/cpp_headers/assert.o 00:02:43.215 CXX test/cpp_headers/barrier.o 00:02:43.215 CXX test/cpp_headers/base64.o 00:02:43.215 CXX test/cpp_headers/bdev.o 00:02:43.215 CXX test/cpp_headers/bdev_zone.o 00:02:43.215 CXX test/cpp_headers/bdev_module.o 00:02:43.215 CXX test/cpp_headers/bit_array.o 00:02:43.215 CXX test/cpp_headers/blob_bdev.o 00:02:43.215 CXX test/cpp_headers/bit_pool.o 00:02:43.215 CXX test/cpp_headers/blobfs_bdev.o 00:02:43.215 CXX test/cpp_headers/blobfs.o 00:02:43.215 CXX test/cpp_headers/blob.o 00:02:43.215 CXX test/cpp_headers/cpuset.o 00:02:43.215 CXX test/cpp_headers/conf.o 00:02:43.215 CXX test/cpp_headers/config.o 00:02:43.215 CXX test/cpp_headers/crc16.o 00:02:43.215 CXX test/cpp_headers/crc32.o 00:02:43.215 CXX test/cpp_headers/crc64.o 00:02:43.215 CXX test/cpp_headers/dif.o 00:02:43.215 CXX test/cpp_headers/dma.o 00:02:43.215 CXX test/cpp_headers/endian.o 00:02:43.215 CXX test/cpp_headers/env_dpdk.o 00:02:43.215 CXX test/cpp_headers/env.o 00:02:43.215 CXX test/cpp_headers/event.o 00:02:43.215 CXX test/cpp_headers/fd_group.o 00:02:43.215 CXX test/cpp_headers/fd.o 00:02:43.215 CXX test/cpp_headers/ftl.o 00:02:43.215 CXX test/cpp_headers/file.o 00:02:43.215 CXX test/cpp_headers/gpt_spec.o 00:02:43.215 CXX test/cpp_headers/hexlify.o 00:02:43.215 CXX test/cpp_headers/histogram_data.o 00:02:43.215 CXX test/cpp_headers/idxd.o 00:02:43.215 CXX test/cpp_headers/idxd_spec.o 00:02:43.215 CXX test/cpp_headers/ioat.o 00:02:43.215 CXX test/cpp_headers/init.o 00:02:43.215 CXX test/cpp_headers/ioat_spec.o 00:02:43.215 CXX test/cpp_headers/jsonrpc.o 00:02:43.215 CXX test/cpp_headers/iscsi_spec.o 00:02:43.215 CXX test/cpp_headers/json.o 00:02:43.215 CXX test/cpp_headers/keyring_module.o 00:02:43.215 CXX test/cpp_headers/keyring.o 00:02:43.215 CXX test/cpp_headers/likely.o 00:02:43.215 CXX test/cpp_headers/log.o 00:02:43.215 CXX test/cpp_headers/lvol.o 00:02:43.215 CXX test/cpp_headers/mmio.o 00:02:43.215 CXX test/cpp_headers/memory.o 00:02:43.215 CXX test/cpp_headers/notify.o 00:02:43.215 CXX test/cpp_headers/nbd.o 00:02:43.215 CXX test/cpp_headers/nvme.o 00:02:43.215 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:43.215 CXX test/cpp_headers/nvme_ocssd.o 00:02:43.215 CXX test/cpp_headers/nvme_intel.o 00:02:43.215 CXX test/cpp_headers/nvme_spec.o 00:02:43.215 CXX test/cpp_headers/nvmf_cmd.o 00:02:43.215 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:43.215 CXX test/cpp_headers/nvme_zns.o 00:02:43.215 CXX test/cpp_headers/nvmf.o 00:02:43.215 CXX test/cpp_headers/nvmf_spec.o 00:02:43.215 CXX test/cpp_headers/opal.o 00:02:43.215 CXX test/cpp_headers/nvmf_transport.o 00:02:43.215 CXX test/cpp_headers/opal_spec.o 00:02:43.215 CXX test/cpp_headers/pci_ids.o 00:02:43.215 CC examples/sock/hello_world/hello_sock.o 00:02:43.215 CXX test/cpp_headers/queue.o 00:02:43.215 CXX test/cpp_headers/pipe.o 00:02:43.215 CXX test/cpp_headers/reduce.o 00:02:43.215 CXX test/cpp_headers/scheduler.o 00:02:43.215 CXX test/cpp_headers/rpc.o 00:02:43.215 CXX 
test/cpp_headers/scsi.o 00:02:43.215 CC examples/nvme/hotplug/hotplug.o 00:02:43.215 CC examples/idxd/perf/perf.o 00:02:43.215 CC examples/vmd/lsvmd/lsvmd.o 00:02:43.215 CC examples/ioat/perf/perf.o 00:02:43.215 CC examples/vmd/led/led.o 00:02:43.215 CC examples/nvme/reconnect/reconnect.o 00:02:43.215 CC test/event/reactor/reactor.o 00:02:43.215 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:43.215 CC examples/accel/perf/accel_perf.o 00:02:43.486 CC examples/nvme/abort/abort.o 00:02:43.486 CC examples/nvme/hello_world/hello_world.o 00:02:43.486 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:43.486 CXX test/cpp_headers/scsi_spec.o 00:02:43.486 CC examples/util/zipf/zipf.o 00:02:43.486 CC test/env/memory/memory_ut.o 00:02:43.486 CC test/event/event_perf/event_perf.o 00:02:43.486 CC examples/ioat/verify/verify.o 00:02:43.486 CC examples/nvme/arbitration/arbitration.o 00:02:43.486 CC test/event/reactor_perf/reactor_perf.o 00:02:43.486 CC test/env/pci/pci_ut.o 00:02:43.486 CC test/nvme/overhead/overhead.o 00:02:43.486 CC examples/bdev/hello_world/hello_bdev.o 00:02:43.486 CC app/fio/nvme/fio_plugin.o 00:02:43.486 CC test/nvme/e2edp/nvme_dp.o 00:02:43.486 CC test/nvme/reserve/reserve.o 00:02:43.486 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:43.486 CC test/env/vtophys/vtophys.o 00:02:43.486 CC test/nvme/sgl/sgl.o 00:02:43.486 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:43.486 CC examples/blob/cli/blobcli.o 00:02:43.486 CXX test/cpp_headers/sock.o 00:02:43.486 CC examples/blob/hello_world/hello_blob.o 00:02:43.486 CC test/app/histogram_perf/histogram_perf.o 00:02:43.486 CC test/nvme/aer/aer.o 00:02:43.486 CC test/nvme/startup/startup.o 00:02:43.487 CC test/nvme/reset/reset.o 00:02:43.487 CC test/nvme/simple_copy/simple_copy.o 00:02:43.487 CC test/event/app_repeat/app_repeat.o 00:02:43.487 CC examples/bdev/bdevperf/bdevperf.o 00:02:43.487 CC test/app/jsoncat/jsoncat.o 00:02:43.487 CC test/nvme/connect_stress/connect_stress.o 00:02:43.487 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:43.487 CC test/accel/dif/dif.o 00:02:43.487 CC examples/thread/thread/thread_ex.o 00:02:43.487 CC test/nvme/err_injection/err_injection.o 00:02:43.487 CC test/nvme/boot_partition/boot_partition.o 00:02:43.487 CC test/app/stub/stub.o 00:02:43.487 CC examples/nvmf/nvmf/nvmf.o 00:02:43.487 CC test/nvme/fused_ordering/fused_ordering.o 00:02:43.487 CC test/dma/test_dma/test_dma.o 00:02:43.487 CC test/nvme/compliance/nvme_compliance.o 00:02:43.487 CC test/thread/poller_perf/poller_perf.o 00:02:43.487 CC test/event/scheduler/scheduler.o 00:02:43.487 CC test/nvme/cuse/cuse.o 00:02:43.487 CC test/app/bdev_svc/bdev_svc.o 00:02:43.487 CC test/nvme/fdp/fdp.o 00:02:43.487 CC test/bdev/bdevio/bdevio.o 00:02:43.487 CC app/fio/bdev/fio_plugin.o 00:02:43.487 CC test/blobfs/mkfs/mkfs.o 00:02:43.487 LINK spdk_lspci 00:02:43.751 LINK nvmf_tgt 00:02:43.751 LINK interrupt_tgt 00:02:43.751 LINK rpc_client_test 00:02:43.751 LINK spdk_nvme_discover 00:02:43.751 LINK vhost 00:02:43.751 LINK iscsi_tgt 00:02:43.751 CC test/env/mem_callbacks/mem_callbacks.o 00:02:43.751 CC test/lvol/esnap/esnap.o 00:02:44.012 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:44.012 LINK lsvmd 00:02:44.012 LINK reactor_perf 00:02:44.012 LINK reactor 00:02:44.012 LINK zipf 00:02:44.012 LINK spdk_tgt 00:02:44.012 LINK spdk_trace_record 00:02:44.012 LINK app_repeat 00:02:44.012 LINK vtophys 00:02:44.012 LINK led 00:02:44.012 LINK jsoncat 00:02:44.012 LINK cmb_copy 00:02:44.012 LINK event_perf 00:02:44.012 LINK startup 00:02:44.012 LINK 
histogram_perf 00:02:44.012 LINK poller_perf 00:02:44.012 LINK boot_partition 00:02:44.012 CXX test/cpp_headers/stdinc.o 00:02:44.012 CXX test/cpp_headers/string.o 00:02:44.012 LINK bdev_svc 00:02:44.012 LINK reserve 00:02:44.012 CXX test/cpp_headers/thread.o 00:02:44.012 LINK pmr_persistence 00:02:44.012 LINK connect_stress 00:02:44.012 CXX test/cpp_headers/trace.o 00:02:44.012 CXX test/cpp_headers/trace_parser.o 00:02:44.012 CXX test/cpp_headers/tree.o 00:02:44.012 CXX test/cpp_headers/ublk.o 00:02:44.012 LINK ioat_perf 00:02:44.012 LINK env_dpdk_post_init 00:02:44.012 CXX test/cpp_headers/util.o 00:02:44.012 CXX test/cpp_headers/uuid.o 00:02:44.012 CXX test/cpp_headers/version.o 00:02:44.012 CXX test/cpp_headers/vfio_user_pci.o 00:02:44.012 CXX test/cpp_headers/vfio_user_spec.o 00:02:44.012 CXX test/cpp_headers/vhost.o 00:02:44.012 LINK doorbell_aers 00:02:44.012 LINK verify 00:02:44.012 CXX test/cpp_headers/vmd.o 00:02:44.012 CXX test/cpp_headers/xor.o 00:02:44.012 LINK hello_world 00:02:44.012 CXX test/cpp_headers/zipf.o 00:02:44.012 LINK stub 00:02:44.012 LINK hello_sock 00:02:44.012 LINK hello_blob 00:02:44.012 LINK err_injection 00:02:44.012 LINK mkfs 00:02:44.012 LINK spdk_dd 00:02:44.012 LINK hotplug 00:02:44.012 LINK simple_copy 00:02:44.012 LINK fused_ordering 00:02:44.012 LINK hello_bdev 00:02:44.271 LINK scheduler 00:02:44.271 LINK nvme_dp 00:02:44.271 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:44.271 LINK overhead 00:02:44.271 LINK thread 00:02:44.271 LINK reset 00:02:44.271 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:44.271 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:44.271 LINK nvme_compliance 00:02:44.271 LINK sgl 00:02:44.271 LINK aer 00:02:44.271 LINK nvmf 00:02:44.271 LINK arbitration 00:02:44.271 LINK idxd_perf 00:02:44.271 LINK reconnect 00:02:44.271 LINK fdp 00:02:44.271 LINK abort 00:02:44.271 LINK spdk_trace 00:02:44.271 LINK pci_ut 00:02:44.271 LINK dif 00:02:44.271 LINK test_dma 00:02:44.271 LINK accel_perf 00:02:44.271 LINK bdevio 00:02:44.532 LINK spdk_nvme 00:02:44.532 LINK nvme_manage 00:02:44.532 LINK nvme_fuzz 00:02:44.532 LINK blobcli 00:02:44.532 LINK spdk_bdev 00:02:44.532 LINK spdk_nvme_identify 00:02:44.533 LINK spdk_nvme_perf 00:02:44.533 LINK vhost_fuzz 00:02:44.533 LINK spdk_top 00:02:44.533 LINK bdevperf 00:02:44.533 LINK mem_callbacks 00:02:44.794 LINK memory_ut 00:02:45.055 LINK cuse 00:02:45.685 LINK iscsi_fuzz 00:02:48.252 LINK esnap 00:02:48.252 00:02:48.252 real 0m50.561s 00:02:48.252 user 6m33.304s 00:02:48.252 sys 4m33.416s 00:02:48.252 19:54:40 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:48.252 19:54:40 make -- common/autotest_common.sh@10 -- $ set +x 00:02:48.252 ************************************ 00:02:48.252 END TEST make 00:02:48.252 ************************************ 00:02:48.252 19:54:40 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:48.252 19:54:40 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:48.252 19:54:40 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:48.252 19:54:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:48.252 19:54:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:48.252 19:54:40 -- pm/common@44 -- $ pid=3869171 00:02:48.252 19:54:40 -- pm/common@50 -- $ kill -TERM 3869171 00:02:48.252 19:54:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:48.252 19:54:40 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:48.252 19:54:40 -- pm/common@44 -- $ pid=3869172 00:02:48.252 19:54:40 -- pm/common@50 -- $ kill -TERM 3869172 00:02:48.252 19:54:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:48.252 19:54:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:48.252 19:54:40 -- pm/common@44 -- $ pid=3869174 00:02:48.252 19:54:40 -- pm/common@50 -- $ kill -TERM 3869174 00:02:48.252 19:54:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:48.252 19:54:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:48.252 19:54:40 -- pm/common@44 -- $ pid=3869198 00:02:48.252 19:54:40 -- pm/common@50 -- $ sudo -E kill -TERM 3869198 00:02:48.514 19:54:40 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:48.514 19:54:40 -- nvmf/common.sh@7 -- # uname -s 00:02:48.514 19:54:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:48.514 19:54:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:48.514 19:54:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:48.514 19:54:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:48.514 19:54:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:48.514 19:54:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:48.514 19:54:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:48.514 19:54:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:48.514 19:54:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:48.514 19:54:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:48.514 19:54:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:48.514 19:54:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:48.514 19:54:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:48.514 19:54:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:48.514 19:54:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:48.514 19:54:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:48.514 19:54:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:48.514 19:54:40 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:48.514 19:54:40 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:48.514 19:54:40 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:48.514 19:54:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:48.514 19:54:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:48.514 19:54:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:48.514 19:54:40 -- paths/export.sh@5 -- # export PATH 00:02:48.514 19:54:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:48.514 19:54:40 -- nvmf/common.sh@47 -- # : 0 00:02:48.514 19:54:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:48.514 19:54:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:48.514 19:54:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:48.514 19:54:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:48.514 19:54:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:48.514 19:54:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:48.514 19:54:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:48.514 19:54:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:48.514 19:54:40 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:48.514 19:54:40 -- spdk/autotest.sh@32 -- # uname -s 00:02:48.514 19:54:40 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:48.514 19:54:40 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:48.514 19:54:40 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:48.514 19:54:40 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:48.514 19:54:40 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:48.514 19:54:40 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:48.514 19:54:40 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:48.514 19:54:40 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:48.514 19:54:40 -- spdk/autotest.sh@48 -- # udevadm_pid=3931452 00:02:48.514 19:54:40 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:48.514 19:54:40 -- pm/common@17 -- # local monitor 00:02:48.514 19:54:40 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:48.514 19:54:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:48.514 19:54:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:48.514 19:54:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:48.514 19:54:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:48.514 19:54:40 -- pm/common@21 -- # date +%s 00:02:48.514 19:54:40 -- pm/common@21 -- # date +%s 00:02:48.514 19:54:40 -- pm/common@25 -- # sleep 1 00:02:48.514 19:54:40 -- pm/common@21 -- # date +%s 00:02:48.514 19:54:40 -- pm/common@21 -- # date +%s 00:02:48.514 19:54:40 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715795680 00:02:48.514 19:54:40 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715795680 
00:02:48.514 19:54:40 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715795680 00:02:48.514 19:54:40 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715795680 00:02:48.514 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715795680_collect-vmstat.pm.log 00:02:48.514 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715795680_collect-cpu-load.pm.log 00:02:48.514 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715795680_collect-cpu-temp.pm.log 00:02:48.514 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715795680_collect-bmc-pm.bmc.pm.log 00:02:49.458 19:54:41 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:49.458 19:54:41 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:49.458 19:54:41 -- common/autotest_common.sh@720 -- # xtrace_disable 00:02:49.458 19:54:41 -- common/autotest_common.sh@10 -- # set +x 00:02:49.458 19:54:41 -- spdk/autotest.sh@59 -- # create_test_list 00:02:49.458 19:54:41 -- common/autotest_common.sh@744 -- # xtrace_disable 00:02:49.458 19:54:41 -- common/autotest_common.sh@10 -- # set +x 00:02:49.458 19:54:41 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:49.458 19:54:41 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:49.458 19:54:41 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:49.458 19:54:41 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:49.458 19:54:41 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:49.458 19:54:41 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:49.458 19:54:41 -- common/autotest_common.sh@1451 -- # uname 00:02:49.720 19:54:41 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:02:49.720 19:54:41 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:49.720 19:54:41 -- common/autotest_common.sh@1471 -- # uname 00:02:49.720 19:54:41 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:02:49.720 19:54:41 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:49.720 19:54:41 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:49.720 19:54:41 -- spdk/autotest.sh@72 -- # hash lcov 00:02:49.720 19:54:41 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:49.720 19:54:41 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:49.720 --rc lcov_branch_coverage=1 00:02:49.720 --rc lcov_function_coverage=1 00:02:49.720 --rc genhtml_branch_coverage=1 00:02:49.720 --rc genhtml_function_coverage=1 00:02:49.720 --rc genhtml_legend=1 00:02:49.720 --rc geninfo_all_blocks=1 00:02:49.720 ' 00:02:49.720 19:54:41 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:49.720 --rc lcov_branch_coverage=1 00:02:49.720 --rc lcov_function_coverage=1 00:02:49.720 --rc genhtml_branch_coverage=1 00:02:49.720 --rc genhtml_function_coverage=1 00:02:49.720 --rc genhtml_legend=1 00:02:49.720 --rc geninfo_all_blocks=1 00:02:49.720 ' 
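The pm collectors launched above (collect-cpu-load, collect-vmstat, collect-cpu-temp and collect-bmc-pm) follow a plain pidfile pattern: each helper is started in the background with -d <power dir> -l -p monitor.autotest.sh.<epoch>, its output is redirected to a matching *.pm.log, and its PID is written to a collect-*.pid file so the TERM-based cleanup seen at the top of this run (signal_monitor_resources TERM, then kill -TERM <pid>, with sudo -E kill -TERM for the BMC collector that runs privileged) can find and stop it. The sketch below shows that pattern only, under stated assumptions: "vmstat 1" stands in for the real pm/collect-* helpers and POWER_DIR for the .../output/power directory, so names and paths here are illustrative, not the scripts this job runs.

#!/usr/bin/env bash
# Minimal sketch of the monitor start/stop pattern; "vmstat 1" stands in for the
# pm/collect-* helpers and POWER_DIR for the .../output/power directory.
set -euo pipefail

POWER_DIR=${POWER_DIR:-/tmp/power}
mkdir -p "$POWER_DIR"

start_monitor() {
    local name=$1 stamp
    stamp=$(date +%s)
    # Background collector, output redirected to its own .pm.log, PID recorded
    # the same way collect-cpu-load.pid, collect-vmstat.pid, etc. are written.
    vmstat 1 > "$POWER_DIR/monitor.autotest.sh.${stamp}_collect-${name}.pm.log" 2>&1 &
    echo $! > "$POWER_DIR/collect-${name}.pid"
}

stop_monitors() {
    local pidfile
    for pidfile in "$POWER_DIR"/collect-*.pid; do
        [[ -e $pidfile ]] || continue
        # TERM first, mirroring signal_monitor_resources TERM; ignore PIDs that
        # have already exited.
        kill -TERM "$(cat "$pidfile")" 2>/dev/null || true
        rm -f "$pidfile"
    done
}

start_monitor vmstat
# ... build and tests run here ...
stop_monitors

Sending TERM rather than KILL gives each collector a chance to write its final sample to the .pm.log before exiting, which is why the cleanup above only falls back to a hard kill when a stale spdk process is found at the very start of the job.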
00:02:49.720 19:54:41 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:49.720 --rc lcov_branch_coverage=1 00:02:49.720 --rc lcov_function_coverage=1 00:02:49.720 --rc genhtml_branch_coverage=1 00:02:49.720 --rc genhtml_function_coverage=1 00:02:49.720 --rc genhtml_legend=1 00:02:49.720 --rc geninfo_all_blocks=1 00:02:49.720 --no-external' 00:02:49.720 19:54:41 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:49.720 --rc lcov_branch_coverage=1 00:02:49.720 --rc lcov_function_coverage=1 00:02:49.720 --rc genhtml_branch_coverage=1 00:02:49.720 --rc genhtml_function_coverage=1 00:02:49.720 --rc genhtml_legend=1 00:02:49.720 --rc geninfo_all_blocks=1 00:02:49.720 --no-external' 00:02:49.720 19:54:41 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:49.720 lcov: LCOV version 1.14 00:02:49.720 19:54:42 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:01.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:01.961 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:01.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:01.961 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:01.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:01.961 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:01.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:01.961 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:16.880 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:16.880 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:16.880 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:16.880 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:16.880 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:16.880 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:16.880 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:16.880 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:16.880 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no 
functions found 00:03:16.880 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:16.881 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:16.881 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:16.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:16.881 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:16.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:16.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:16.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:16.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:16.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:16.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:16.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:16.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:16.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 
00:03:16.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:16.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:16.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:16.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:16.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:16.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:16.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:16.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:16.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:16.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:16.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:16.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:16.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:16.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:16.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:16.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:16.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:16.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:16.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:16.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:16.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:16.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:16.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:16.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:16.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:16.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:16.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:16.882 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:16.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:16.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:16.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:16.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:16.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:16.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:16.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:16.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:16.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:16.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:16.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:16.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:16.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:16.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:16.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:16.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:16.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:16.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:16.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:16.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:16.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:16.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:16.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:16.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:16.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:16.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:16.882 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:16.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:16.882 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:17.454 19:55:09 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:17.454 19:55:09 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:17.454 19:55:09 -- common/autotest_common.sh@10 -- # set +x 00:03:17.454 19:55:09 -- spdk/autotest.sh@91 -- # rm -f 00:03:17.454 19:55:09 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:21.664 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:21.664 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:21.664 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:21.664 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:21.664 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:21.664 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:21.664 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:21.664 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:21.664 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:21.664 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:21.664 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:21.664 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:21.664 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:21.664 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:21.664 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:21.664 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:21.664 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:21.923 19:55:14 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:21.923 19:55:14 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:21.924 19:55:14 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:21.924 19:55:14 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:03:21.924 19:55:14 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:21.924 19:55:14 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:21.924 19:55:14 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:21.924 19:55:14 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:21.924 19:55:14 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:21.924 19:55:14 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:21.924 19:55:14 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:21.924 19:55:14 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:21.924 19:55:14 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:21.924 19:55:14 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:21.924 19:55:14 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:21.924 No valid GPT data, bailing 00:03:22.186 19:55:14 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:22.186 19:55:14 -- scripts/common.sh@391 -- # pt= 00:03:22.186 19:55:14 -- scripts/common.sh@392 -- # return 1 00:03:22.186 19:55:14 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:22.186 1+0 records in 
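Each NVMe namespace is checked before it is wiped: zoned devices are skipped, and a namespace is zeroed only when spdk-gpt.py/blkid find no partition table in use (hence the "No valid GPT data, bailing" above). A condensed, approximate sketch of that decision, with the device name hard-coded for illustration:

    dev=/dev/nvme0n1
    name=${dev##*/}                                   # nvme0n1
    if [[ -e /sys/block/$name/queue/zoned && $(cat /sys/block/$name/queue/zoned) != none ]]; then
        echo "skipping zoned namespace $dev"          # would be collected into zoned_devs above
    elif [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1       # clear the first MiB, as traced above
    fi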
00:03:22.186 1+0 records out 00:03:22.186 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00425652 s, 246 MB/s 00:03:22.186 19:55:14 -- spdk/autotest.sh@118 -- # sync 00:03:22.186 19:55:14 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:22.186 19:55:14 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:22.186 19:55:14 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:30.332 19:55:22 -- spdk/autotest.sh@124 -- # uname -s 00:03:30.332 19:55:22 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:30.332 19:55:22 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:30.332 19:55:22 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:30.332 19:55:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:30.332 19:55:22 -- common/autotest_common.sh@10 -- # set +x 00:03:30.332 ************************************ 00:03:30.332 START TEST setup.sh 00:03:30.332 ************************************ 00:03:30.332 19:55:22 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:30.332 * Looking for test storage... 00:03:30.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:30.332 19:55:22 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:30.332 19:55:22 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:30.332 19:55:22 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:30.332 19:55:22 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:30.332 19:55:22 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:30.332 19:55:22 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:30.332 ************************************ 00:03:30.332 START TEST acl 00:03:30.332 ************************************ 00:03:30.332 19:55:22 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:30.598 * Looking for test storage... 
00:03:30.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:30.598 19:55:22 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:30.598 19:55:22 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:30.598 19:55:22 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:30.598 19:55:22 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:03:30.598 19:55:22 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:30.598 19:55:22 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:30.598 19:55:22 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:30.598 19:55:22 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:30.599 19:55:22 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:30.599 19:55:22 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:30.599 19:55:22 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:30.599 19:55:22 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:30.599 19:55:22 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:30.599 19:55:22 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:30.599 19:55:22 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:30.599 19:55:22 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:35.897 19:55:27 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:35.897 19:55:27 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:35.897 19:55:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:35.897 19:55:27 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:35.897 19:55:27 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.897 19:55:27 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:39.202 Hugepages 00:03:39.202 node hugesize free / total 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.202 00:03:39.202 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:39.202 19:55:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:39.203 19:55:31 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:39.203 19:55:31 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:39.203 19:55:31 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:39.203 19:55:31 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:39.464 ************************************ 00:03:39.464 START TEST denied 00:03:39.464 ************************************ 00:03:39.464 19:55:31 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:03:39.464 19:55:31 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:39.464 19:55:31 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:39.464 19:55:31 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:39.464 19:55:31 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.464 19:55:31 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:43.672 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:43.672 19:55:35 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:43.672 19:55:35 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:43.672 19:55:35 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:43.672 19:55:35 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:43.672 19:55:35 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:43.672 19:55:35 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:43.672 19:55:35 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:43.672 19:55:35 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:43.672 19:55:35 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:43.672 19:55:35 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:48.963 00:03:48.963 real 0m9.580s 00:03:48.963 user 0m3.063s 00:03:48.963 sys 0m5.692s 00:03:48.963 19:55:41 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:48.963 19:55:41 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:48.963 ************************************ 00:03:48.963 END TEST denied 00:03:48.963 ************************************ 00:03:48.963 19:55:41 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:48.963 19:55:41 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:48.963 19:55:41 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:48.963 19:55:41 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:48.963 ************************************ 00:03:48.963 START TEST allowed 00:03:48.963 ************************************ 00:03:48.963 19:55:41 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:03:48.963 19:55:41 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:48.963 19:55:41 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:48.963 19:55:41 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:48.963 19:55:41 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.963 19:55:41 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:55.689 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:55.689 19:55:47 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:55.689 19:55:47 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:55.689 19:55:47 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:55.689 19:55:47 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:55.689 19:55:47 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:59.900 00:03:59.900 real 0m10.641s 00:03:59.900 user 0m3.150s 00:03:59.900 sys 0m5.801s 00:03:59.900 19:55:52 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:59.900 19:55:52 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:59.900 ************************************ 00:03:59.900 END TEST allowed 00:03:59.900 ************************************ 00:03:59.900 00:03:59.900 real 0m29.324s 00:03:59.900 user 0m9.626s 00:03:59.900 sys 0m17.399s 00:03:59.900 19:55:52 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:59.900 19:55:52 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:59.900 ************************************ 00:03:59.900 END TEST acl 00:03:59.900 ************************************ 00:03:59.900 19:55:52 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:59.900 19:55:52 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:59.900 19:55:52 setup.sh -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:03:59.900 19:55:52 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:59.900 ************************************ 00:03:59.900 START TEST hugepages 00:03:59.900 ************************************ 00:03:59.900 19:55:52 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:59.900 * Looking for test storage... 00:03:59.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:59.900 19:55:52 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:59.900 19:55:52 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:59.900 19:55:52 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:59.900 19:55:52 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:59.900 19:55:52 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:59.900 19:55:52 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:59.900 19:55:52 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:59.900 19:55:52 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:59.900 19:55:52 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:59.900 19:55:52 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:59.900 19:55:52 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.900 19:55:52 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.900 19:55:52 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.900 19:55:52 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.900 19:55:52 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.900 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.900 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.900 19:55:52 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 102826756 kB' 'MemAvailable: 107309984 kB' 'Buffers: 4124 kB' 'Cached: 14602912 kB' 'SwapCached: 0 kB' 'Active: 10729360 kB' 'Inactive: 4481552 kB' 'Active(anon): 10089496 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607312 kB' 'Mapped: 240548 kB' 'Shmem: 9485620 kB' 'KReclaimable: 372288 kB' 'Slab: 1250036 kB' 'SReclaimable: 372288 kB' 'SUnreclaim: 877748 kB' 'KernelStack: 27344 kB' 'PageTables: 9464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460888 kB' 'Committed_AS: 11537328 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237800 kB' 'VmallocChunk: 0 kB' 'Percpu: 130752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4441460 kB' 'DirectMap2M: 50812928 kB' 'DirectMap1G: 80740352 kB' 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e 
]] 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.901 19:55:52 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.901 19:55:52 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.901 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
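The trace above is setup/common.sh's get_meminfo walking /proc/meminfo one key at a time: every non-matching key hits the "continue" branch until Hugepagesize is reached a few entries below and its value (2048) is echoed back. A simplified, self-contained sketch of that lookup, assuming bash 4+ and reading the file directly rather than snapshotting it with mapfile as the traced helper does:

# Simplified sketch of the lookup traced above: split each /proc/meminfo line
# on ': ', skip every key that does not match, and print the first match.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"    # e.g. 2048 for Hugepagesize (the trailing "kB" lands in $_)
            return 0
        fi
    done < /proc/meminfo
    return 1
}
# Usage: get_meminfo_sketch Hugepagesize   ->  2048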
00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:59.902 19:55:52 
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:59.902 19:55:52 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:59.902 19:55:52 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:59.902 19:55:52 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:59.902 19:55:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:59.902 ************************************ 00:03:59.902 START TEST default_setup 00:03:59.902 ************************************ 00:03:59.902 19:55:52 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:03:59.902 19:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:59.902 19:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:59.902 19:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:59.902 19:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:59.902 19:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:59.902 19:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:59.902 19:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:59.902 19:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:59.902 19:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:59.902 19:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:59.902 19:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:59.902 19:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:59.902 19:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:59.902 19:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:59.902 19:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:59.902 19:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:59.902 19:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:59.902 19:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:59.902 19:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:59.902 19:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:59.902 19:55:52 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.902 19:55:52 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:03.206 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:03.206 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:03.206 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:03.206 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:03.206 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:03.206 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:03.206 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 
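Just before the device rebinds that start above and continue below, hugepages.sh cleared every pre-allocated hugepage pool on both NUMA nodes (the repeated "echo 0" entries), exported CLEAR_HUGE=yes, and computed the test allocation: 2097152 kB at the default 2048 kB page size is nr_hugepages=1024, all placed on node 0. A minimal sketch of that clearing and sizing step, assuming root and the standard sysfs layout; the function name and the hard-coded node are illustrative only:

# Minimal sketch of the clearing traced above: zero every hugepage pool under
# every NUMA node, then request 1024 x 2048 kB pages (2097152 kB) on node 0.
clear_and_size_hugepages_sketch() {
    local node hp
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
    # Per-node request matching the traced default_setup parameters.
    echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
}
# Needs root and writes to sysfs, so only run it on a machine you intend to reconfigure.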
00:04:03.206 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:03.206 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:03.206 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:03.467 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:03.467 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:03.467 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:03.467 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:03.467 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:03.467 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:03.467 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:03.732 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:03.732 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:03.732 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:03.732 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:03.732 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:03.732 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:03.732 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:03.732 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:03.732 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:03.732 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:03.732 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:03.732 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:03.732 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:03.732 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.732 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.732 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.732 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.732 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.732 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.732 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.732 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 104991988 kB' 'MemAvailable: 109475200 kB' 'Buffers: 4124 kB' 'Cached: 14603052 kB' 'SwapCached: 0 kB' 'Active: 10746252 kB' 'Inactive: 4481552 kB' 'Active(anon): 10106388 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623944 kB' 'Mapped: 240848 kB' 'Shmem: 9485760 kB' 'KReclaimable: 372256 kB' 'Slab: 1248156 kB' 'SReclaimable: 372256 kB' 'SUnreclaim: 875900 kB' 'KernelStack: 27744 kB' 'PageTables: 9720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11554864 kB' 'VmallocTotal: 13743895347199 kB' 
'VmallocUsed: 237992 kB' 'VmallocChunk: 0 kB' 'Percpu: 130752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4441460 kB' 'DirectMap2M: 50812928 kB' 'DirectMap1G: 80740352 kB' 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
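The meminfo snapshot printed just above already contains the numbers the verification below keeps interrogating key by key: HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0, Hugepagesize: 2048 kB. A rough, self-contained sketch of what that bookkeeping amounts to, using awk instead of the script's read loop; the helper name and the hard-coded expectation of 1024 pages are illustrative:

# Rough sketch of the check implied by the trace: the requested 1024 pages are
# present, free, and neither reserved nor surplus.
verify_hugepages_sketch() {
    local total free rsvd surp
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    free=$(awk  '$1 == "HugePages_Free:"  {print $2}' /proc/meminfo)
    rsvd=$(awk  '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)
    surp=$(awk  '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)
    (( total == 1024 && free == 1024 && rsvd == 0 && surp == 0 ))
}
# verify_hugepages_sketch && echo "hugepages look as expected"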
00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.733 19:55:56 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.733 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.734 19:55:56 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 
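The "[[ -e /sys/devices/system/node/node/meminfo ]]" tests scattered through this stretch are the same helper being invoked with an empty node argument, so it falls back to the global /proc/meminfo; with a node number it would read that node's meminfo and strip the "Node N " prefix, which is what the mem=("${mem[@]#Node +([0-9]) }") step in the trace does. A sketch of that source selection, with a hypothetical helper name, kept deliberately close to what the xtrace shows:

# Sketch of the node-aware lookup visible in the trace: prefer the per-node
# meminfo when a node is given, otherwise fall back to /proc/meminfo.
get_node_meminfo_sketch() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        line=${line#"Node $node "}              # no-op for /proc/meminfo
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}
# get_node_meminfo_sketch HugePages_Surp      ->  0   (global, as in the trace)
# get_node_meminfo_sketch HugePages_Total 0   ->  per-node count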
00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 104992820 kB' 'MemAvailable: 109476032 kB' 'Buffers: 4124 kB' 'Cached: 14603052 kB' 'SwapCached: 0 kB' 'Active: 10746248 kB' 'Inactive: 4481552 kB' 'Active(anon): 10106384 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623956 kB' 'Mapped: 240740 kB' 'Shmem: 9485760 kB' 'KReclaimable: 372256 kB' 'Slab: 1247796 kB' 'SReclaimable: 372256 kB' 'SUnreclaim: 875540 kB' 'KernelStack: 27568 kB' 'PageTables: 9800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11553272 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237864 kB' 'VmallocChunk: 0 kB' 'Percpu: 130752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4441460 kB' 'DirectMap2M: 50812928 kB' 'DirectMap1G: 80740352 kB' 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.734 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.735 19:55:56 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.735 19:55:56 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.735 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.736 19:55:56 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 104990836 kB' 'MemAvailable: 109474048 kB' 'Buffers: 4124 kB' 'Cached: 14603072 kB' 'SwapCached: 0 kB' 'Active: 10746240 kB' 'Inactive: 4481552 kB' 'Active(anon): 10106376 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623884 kB' 'Mapped: 240740 kB' 'Shmem: 9485780 kB' 'KReclaimable: 372256 kB' 'Slab: 1247796 kB' 'SReclaimable: 372256 kB' 'SUnreclaim: 875540 kB' 'KernelStack: 27664 kB' 'PageTables: 10216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11554904 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237976 kB' 'VmallocChunk: 0 kB' 'Percpu: 130752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4441460 kB' 'DirectMap2M: 50812928 kB' 'DirectMap1G: 80740352 kB' 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.736 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 
19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.737 19:55:56 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.737 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 19:55:56 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.738 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.003 19:55:56 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:04.003 nr_hugepages=1024 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:04.003 resv_hugepages=0 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:04.003 surplus_hugepages=0 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:04.003 anon_hugepages=0 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:04.003 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 104990492 kB' 'MemAvailable: 109473704 kB' 'Buffers: 4124 kB' 'Cached: 14603092 kB' 'SwapCached: 0 kB' 'Active: 
10746792 kB' 'Inactive: 4481552 kB' 'Active(anon): 10106928 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624464 kB' 'Mapped: 240740 kB' 'Shmem: 9485800 kB' 'KReclaimable: 372256 kB' 'Slab: 1247796 kB' 'SReclaimable: 372256 kB' 'SUnreclaim: 875540 kB' 'KernelStack: 27728 kB' 'PageTables: 9844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11554924 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238088 kB' 'VmallocChunk: 0 kB' 'Percpu: 130752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4441460 kB' 'DirectMap2M: 50812928 kB' 'DirectMap1G: 80740352 kB' 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.004 19:55:56 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.004 
19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.004 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
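For readers following the xtrace: the long runs of '[[ <key> == \H\u\g\e\P\a\g\e\s\_... ]]' / 'continue' entries above and below come from setup/common.sh's get_meminfo helper, which dumps /proc/meminfo (or a per-node meminfo file) and walks it key by key until the requested field matches, then echoes that field's value (0 for HugePages_Rsvd and HugePages_Surp, 1024 for HugePages_Total on this runner). A minimal re-creation of that pattern, assuming plain bash rather than the exact SPDK helper, looks like:

get_meminfo_sketch() {                              # illustrative sketch only, not the verbatim SPDK function
    local get=$1 node=${2:-}                        # field name, optional NUMA node number
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local var val _
    # per-node meminfo lines carry a "Node <n> " prefix; strip it before splitting on ': '
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }   # every non-matching key shows up as a 'continue' under set -x
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}
# get_meminfo_sketch HugePages_Total    -> 1024 on this runner
# get_meminfo_sketch HugePages_Surp 0   -> 0 (node 0)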
00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 19:55:56 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.005 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:04.006 
19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 57931996 kB' 'MemUsed: 7727012 kB' 'SwapCached: 0 kB' 'Active: 3885056 kB' 'Inactive: 156628 kB' 'Active(anon): 3720284 kB' 'Inactive(anon): 0 kB' 'Active(file): 164772 kB' 'Inactive(file): 156628 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3884528 kB' 'Mapped: 47068 kB' 'AnonPages: 160312 kB' 'Shmem: 3563128 kB' 'KernelStack: 12856 kB' 'PageTables: 3532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 167952 kB' 'Slab: 575384 kB' 'SReclaimable: 167952 kB' 'SUnreclaim: 407432 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.006 19:55:56 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.006 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.007 19:55:56 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:04.007 node0=1024 expecting 1024 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:04.007 00:04:04.007 real 0m3.921s 00:04:04.007 user 0m1.340s 00:04:04.007 sys 0m2.508s 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:04.007 19:55:56 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:04.007 ************************************ 00:04:04.007 END TEST default_setup 00:04:04.007 ************************************ 00:04:04.007 19:55:56 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:04.007 19:55:56 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:04.007 19:55:56 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:04.007 19:55:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:04.007 ************************************ 00:04:04.007 START TEST per_node_1G_alloc 00:04:04.007 ************************************ 00:04:04.007 19:55:56 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:04:04.007 19:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:04.007 19:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:04.007 19:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:04.008 19:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:04.008 19:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:04.008 19:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:04.008 19:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
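The per_node_1G_alloc prologue that starts here (and continues in the records below) reduces to a small piece of bookkeeping: the test asks for 1048576 kB, the following records show nr_hugepages=512 (consistent with 1048576 kB divided by the 2048 kB Hugepagesize reported in the meminfo dumps), and get_test_nr_hugepages_per_node then assigns that count to each of nodes 0 and 1 before scripts/setup.sh runs. A minimal bash sketch of that arithmetic follows; it is a simplified, hypothetical reconstruction that mirrors the values visible in the trace, not the actual setup/hugepages.sh code.

    #!/usr/bin/env bash
    # Hypothetical, simplified reconstruction of the bookkeeping traced around
    # hugepages.sh@49-146; it mirrors values visible in the log, not the
    # scripts' actual implementation.
    default_hugepages=2048                       # kB, from "Hugepagesize: 2048 kB" in the dumps
    size=1048576                                 # kB requested by per_node_1G_alloc
    node_ids=(0 1)                               # nodes named on the command line

    nr_hugepages=$(( size / default_hugepages )) # 1048576 / 2048 = 512 pages

    nodes_test=()
    for id in "${node_ids[@]}"; do
        nodes_test[id]=$nr_hugepages             # 512 x 2 MiB pages per NUMA node
    done

    # The same values are what the trace hands to scripts/setup.sh (hugepages.sh@146):
    echo "NRHUGE=$nr_hugepages HUGENODE=$(IFS=,; echo "${node_ids[*]}")"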
00:04:04.008 19:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:04.008 19:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:04.008 19:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:04.008 19:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:04.008 19:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:04.008 19:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:04.008 19:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:04.008 19:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:04.008 19:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:04.008 19:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:04.008 19:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:04.008 19:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:04.008 19:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:04.008 19:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:04.008 19:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:04.008 19:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:04.008 19:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:04.008 19:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:04.008 19:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.008 19:55:56 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:08.223 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:08.223 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:08.223 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:08.223 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:08.223 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:08.223 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:08.223 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:08.223 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:08.223 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:08.223 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:08.223 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:08.223 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:08.224 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:08.224 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:08.224 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:08.224 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:08.224 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 104998288 kB' 'MemAvailable: 109481500 kB' 'Buffers: 4124 kB' 'Cached: 14603212 kB' 'SwapCached: 0 kB' 'Active: 10745768 kB' 'Inactive: 4481552 kB' 'Active(anon): 10105904 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623076 kB' 'Mapped: 239728 kB' 'Shmem: 9485920 kB' 'KReclaimable: 372256 kB' 'Slab: 1247688 kB' 'SReclaimable: 372256 kB' 'SUnreclaim: 875432 kB' 'KernelStack: 27440 kB' 'PageTables: 9328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11539228 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237960 kB' 'VmallocChunk: 0 kB' 'Percpu: 130752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4441460 kB' 'DirectMap2M: 50812928 kB' 'DirectMap1G: 
80740352 kB' 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
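What follows in the trace, through the "echo 0" / "return 0" further down, is setup/common.sh's get_meminfo helper scanning the /proc/meminfo snapshot it just printed for a single key (here AnonHugePages, later HugePages_Surp and HugePages_Rsvd). Stripped of the xtrace noise, the pattern is roughly the sketch below; it is a hedged condensation of what common.sh@17-33 appears to do, not the real helper.

    # Hypothetical condensation of the get_meminfo pattern visible in the trace
    # (setup/common.sh@17-33). Simplified; the real helper differs in detail.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local line var val _
        # With a node argument, the per-node file is preferred (common.sh@23).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#Node "$node" }           # drop the "Node N " prefix, if present
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    get_meminfo HugePages_Total                  # prints 1024 on the box traced here
    get_meminfo HugePages_Surp 0                 # per-node query when node0's meminfo exists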
00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.224 19:56:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.224 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
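Zooming out, the verify_nr_hugepages pass being traced in this stretch gathers three numbers the same way before its per-node comparison: AnonHugePages (gated on transparent hugepages not being "[never]"; the bracketed-mode string at hugepages.sh@96 matches the format of /sys/kernel/mm/transparent_hugepage/enabled, which is presumably where it is read from), then HugePages_Surp, then HugePages_Rsvd in the records that follow. A rough, hedged condensation of that sequence, reusing the get_meminfo sketch above, is shown next; it is not the real checks in setup/hugepages.sh.

    # Rough sketch of the verify_nr_hugepages sequence traced here
    # (setup/hugepages.sh@89-100); a hedged simplification, not the real checks.
    # Assumes the get_meminfo sketch above is already in scope.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)  # "always [madvise] never" in this run (assumed source)
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)        # 0 kB here
    fi
    surp=$(get_meminfo HugePages_Surp)           # 0
    resv=$(get_meminfo HugePages_Rsvd)           # 0, gathered just below in the trace
    echo "anon=$anon surp=$surp resv=$resv"      # feeds the later "node0=... expecting ..." check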
00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 104999052 kB' 'MemAvailable: 109482264 kB' 'Buffers: 4124 kB' 'Cached: 14603212 kB' 'SwapCached: 0 kB' 'Active: 10745076 kB' 'Inactive: 4481552 kB' 'Active(anon): 10105212 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622352 kB' 'Mapped: 239696 kB' 'Shmem: 9485920 kB' 'KReclaimable: 372256 kB' 'Slab: 1247688 kB' 'SReclaimable: 372256 kB' 'SUnreclaim: 875432 kB' 'KernelStack: 27408 kB' 'PageTables: 9224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11539244 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237944 kB' 'VmallocChunk: 0 kB' 'Percpu: 130752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4441460 kB' 'DirectMap2M: 50812928 kB' 'DirectMap1G: 80740352 kB' 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.225 19:56:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.225 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.226 19:56:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.226 19:56:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.226 19:56:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.226 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.227 19:56:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.227 19:56:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:08.227 19:56:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:08.227 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 104999484 kB' 'MemAvailable: 109482696 kB' 'Buffers: 4124 kB' 'Cached: 14603212 kB' 'SwapCached: 0 kB' 'Active: 10744452 kB' 'Inactive: 4481552 kB' 'Active(anon): 10104588 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622132 kB' 'Mapped: 239600 kB' 'Shmem: 9485920 kB' 'KReclaimable: 372256 kB' 'Slab: 1247628 kB' 'SReclaimable: 372256 kB' 'SUnreclaim: 875372 kB' 'KernelStack: 27392 kB' 'PageTables: 9176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11539268 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237944 kB' 'VmallocChunk: 0 kB' 'Percpu: 130752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4441460 kB' 'DirectMap2M: 50812928 kB' 'DirectMap1G: 80740352 kB'
[ setup/common.sh@31-32 repeats "IFS=': '; read -r var val _; [[ $var == HugePages_Rsvd ]]; continue" for each field of the snapshot above until the HugePages_Rsvd line is reached ]
00:04:08.495 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:08.495 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:08.495 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:08.495 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:08.495 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:08.495 nr_hugepages=1024
19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:08.495 resv_hugepages=0
19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:08.495 surplus_hugepages=0
19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
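The block above is bash xtrace output from setup/common.sh's get_meminfo: it snapshots /proc/meminfo, strips any per-node prefix, and splits each line on ': ' until it finds the requested field, here HugePages_Rsvd = 0. A minimal standalone sketch of that lookup pattern, with illustrative names (get_meminfo_value is not the SPDK helper itself):

#!/usr/bin/env bash
# Minimal sketch of a get_meminfo-style lookup (illustrative names, not the
# exact SPDK helper): print the value of one /proc/meminfo field, or of a
# /sys/devices/system/node/node<N>/meminfo field when a node number is given.
get_meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo

    local -a f
    while IFS=': ' read -r -a f; do
        # Per-node meminfo lines carry a "Node <N>" prefix; drop it.
        [[ ${f[0]} == Node ]] && f=("${f[@]:2}")
        if [[ ${f[0]} == "$get" ]]; then
            echo "${f[1]}"   # the number only; a trailing "kB" stays in f[2]
            return 0
        fi
    done < "$mem_f"
    return 1
}

get_meminfo_value HugePages_Rsvd     # 0 in the snapshot above
get_meminfo_value HugePages_Total    # 1024 in the snapshot above
get_meminfo_value HugePages_Surp 0   # the same lookup against node0's meminfo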
00:04:08.495 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:08.495 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:08.495 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:08.495 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:08.495 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:08.495 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:08.495 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:08.495 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.495 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:08.495 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:08.495 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:08.495 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.495 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:08.495 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:08.495 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 104995988 kB' 'MemAvailable: 109479200 kB' 'Buffers: 4124 kB' 'Cached: 14603216 kB' 'SwapCached: 0 kB' 'Active: 10745844 kB' 'Inactive: 4481552 kB' 'Active(anon): 10105980 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623544 kB' 'Mapped: 239600 kB' 'Shmem: 9485924 kB' 'KReclaimable: 372256 kB' 'Slab: 1247628 kB' 'SReclaimable: 372256 kB' 'SUnreclaim: 875372 kB' 'KernelStack: 27376 kB' 'PageTables: 9216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11557844 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237944 kB' 'VmallocChunk: 0 kB' 'Percpu: 130752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4441460 kB' 'DirectMap2M: 50812928 kB' 'DirectMap1G: 80740352 kB'
[ setup/common.sh@31-32 repeats "IFS=': '; read -r var val _; [[ $var == HugePages_Total ]]; continue" for each field of the snapshot above until the HugePages_Total line is reached ]
00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
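At this point the test has nr_hugepages=1024 with resv_hugepages and surplus_hugepages both 0, and it re-reads HugePages_Total to confirm the kernel's pool matches; get_nodes below then records the 512-page share expected on each of the two NUMA nodes. A rough sketch of that consistency check and node enumeration, assuming this run's 1024-page, 2-node layout (the variable names are illustrative):

#!/usr/bin/env bash
# Rough sketch of the global hugepage consistency check and node enumeration
# seen in the trace (illustrative shape, not the literal setup/hugepages.sh code).
nr_hugepages=1024   # the count this test run expects
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)

# The pool is consistent when the kernel's total equals the requested count
# plus any surplus and reserved pages (surp and resv are both 0 in this run).
(( total == nr_hugepages + surp + resv )) || echo "hugepage pool mismatch" >&2

# Enumerate NUMA nodes the way get_nodes does and note the even split this
# run expects (512 pages on each of the 2 nodes).
nodes_sys=()
for node in /sys/devices/system/node/node[0-9]*; do
    nodes_sys[${node##*node}]=$((nr_hugepages / 2))
done
echo "nodes: ${!nodes_sys[*]} -> ${nodes_sys[*]} pages each"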
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58994788 kB' 'MemUsed: 6664220 kB' 'SwapCached: 0 kB' 'Active: 3885912 kB' 'Inactive: 156628 kB' 'Active(anon): 3721140 kB' 'Inactive(anon): 0 kB' 'Active(file): 164772 kB' 'Inactive(file): 156628 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3884544 kB' 'Mapped: 47068 kB' 'AnonPages: 161328 kB' 'Shmem: 3563144 kB' 'KernelStack: 12664 kB' 'PageTables: 3036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 167952 kB' 'Slab: 575216 kB' 'SReclaimable: 167952 kB' 'SUnreclaim: 407264 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.497 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.498 19:56:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.498 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.499 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.499 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.499 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.499 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.499 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.499 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.499 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.499 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:08.499 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.499 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.499 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.499 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.499 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.499 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.499 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.499 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.499 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:08.499 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:08.499 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:08.499 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:08.499 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:08.499 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.499 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:08.499 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:08.499 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.499 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.499 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:08.499 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:08.499 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.499 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.499 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.499 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.499 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679868 kB' 'MemFree: 46002332 kB' 'MemUsed: 14677536 kB' 'SwapCached: 0 kB' 'Active: 6858600 kB' 'Inactive: 4324924 kB' 'Active(anon): 6383508 kB' 'Inactive(anon): 0 kB' 'Active(file): 475092 kB' 'Inactive(file): 4324924 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10722880 kB' 'Mapped: 192532 kB' 'AnonPages: 460788 kB' 'Shmem: 5922864 kB' 'KernelStack: 14632 kB' 'PageTables: 5820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 204304 kB' 'Slab: 672412 kB' 'SReclaimable: 204304 kB' 'SUnreclaim: 468108 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
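The long printf just above is the node-1 meminfo snapshot that setup/common.sh's get_meminfo has loaded for hugepages.sh@117: the trace shows mem_f being switched from /proc/meminfo to /sys/devices/system/node/node1/meminfo (common.sh@22 to @24), the file being slurped with mapfile (@28), the leading "Node <n> " prefix being stripped (@29), and each "key: value" pair then being scanned until the requested field, HugePages_Surp here, is found and echoed (@31 to @33). A minimal sketch of that pattern, based only on what this trace shows and not the actual SPDK helper (get_meminfo_sketch is an invented name):

  # Sketch: reproduce the per-node meminfo lookup visible in the trace above.
  shopt -s extglob

  get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local mem line var val _
    # Per-node statistics live under sysfs, as the trace shows for node1.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node <n> "; drop it, as common.sh@29 does.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
      IFS=': ' read -r var val _ <<< "$line"
      [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
  }

On this box, get_meminfo_sketch HugePages_Surp 1 would print 0, which is the value the @33 echo hands back to hugepages.sh in the surrounding trace.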
00:04:08.499 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[... setup/common.sh@32 continue, @31 IFS=': ', @31 read -r var val _ repeated for each node1 meminfo field from MemTotal through FilePmdMapped; none of them matches HugePages_Surp ...]
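Around these two snapshots, setup/hugepages.sh is doing the per-node bookkeeping for the 1G-alloc check: @115 to @117 fold each node's HugePages_Surp (0 on both nodes here) into the expected per-node count, and @126 to @130, visible just below, print 'node0=512 expecting 512' and 'node1=512 expecting 512' and compare them. A simplified sketch of that accounting, assuming the two-node layout of this machine and leaving out the sorted_t/sorted_s set handling the real script uses:

  # Expected even split for this test: 1024 pages over 2 nodes (from the trace).
  nodes_test=(512 512)
  resv=0   # reserved pages added by hugepages.sh@116; zero in this run

  for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    # HugePages_Surp for the node; a one-line stand-in for get_meminfo.
    surp=$(awk '/HugePages_Surp/ {print $NF}' "/sys/devices/system/node/node$node/meminfo")
    (( nodes_test[node] += surp ))
  done

  for node in "${!nodes_test[@]}"; do
    echo "node$node=${nodes_test[node]} expecting 512"
    [[ ${nodes_test[node]} -eq 512 ]] || echo "unexpected hugepage count on node$node"
  done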
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.500 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.500 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.500 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.500 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.500 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.500 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.500 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.500 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.500 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.500 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.500 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.500 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.500 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.500 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:08.500 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:08.500 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:08.500 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:08.500 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:08.500 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:08.500 node0=512 expecting 512 00:04:08.500 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:08.500 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:08.500 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:08.500 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:08.500 node1=512 expecting 512 00:04:08.500 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:08.500 00:04:08.500 real 0m4.442s 00:04:08.500 user 0m1.681s 00:04:08.500 sys 0m2.826s 00:04:08.500 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:08.500 19:56:00 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:08.500 ************************************ 00:04:08.500 END TEST per_node_1G_alloc 00:04:08.500 ************************************ 00:04:08.500 19:56:00 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:08.500 19:56:00 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:08.500 19:56:00 
setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:08.500 19:56:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:08.500 ************************************ 00:04:08.500 START TEST even_2G_alloc 00:04:08.500 ************************************ 00:04:08.500 19:56:00 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:04:08.500 19:56:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:08.500 19:56:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:08.500 19:56:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:08.501 19:56:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:08.501 19:56:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:08.501 19:56:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:08.501 19:56:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:08.501 19:56:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:08.501 19:56:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:08.501 19:56:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:08.501 19:56:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:08.501 19:56:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:08.501 19:56:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:08.501 19:56:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:08.501 19:56:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:08.501 19:56:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:08.501 19:56:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:08.501 19:56:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:08.501 19:56:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:08.501 19:56:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:08.501 19:56:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:08.501 19:56:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:08.501 19:56:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:08.501 19:56:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:08.501 19:56:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:08.501 19:56:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:08.501 19:56:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.501 19:56:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:12.714 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:12.714 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:12.714 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
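The vfio-pci lines here and below are scripts/setup.sh reporting that the PCI devices it manages are already bound to vfio-pci, so only the hugepage side of setup changes for this test. The sizing it was invoked with comes from the hugepages.sh steps just above: get_test_nr_hugepages 2097152 with the 2048 kB default hugepage size gives nr_hugepages=1024, split evenly over the two nodes, and setup.sh is re-run with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes. A small sketch of that arithmetic, treating the 2097152 argument as kB, which matches the 'Hugetlb: 2097152 kB' totals later in the log:

  size_kb=2097152            # requested by even_2G_alloc (2 GiB)
  hugepage_kb=2048           # Hugepagesize from /proc/meminfo
  no_nodes=2                 # NUMA nodes on this system

  nr_hugepages=$(( size_kb / hugepage_kb ))   # 1024, the NRHUGE value in the trace
  per_node=$(( nr_hugepages / no_nodes ))     # 512 per node, the 'expecting 512' value

  echo "NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes ($per_node pages per node)"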
00:04:12.714 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:12.714 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:12.714 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:12.714 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:12.714 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:12.714 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:12.714 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:12.714 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:12.714 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:12.714 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:12.714 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:12.714 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:12.714 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:12.714 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:12.714 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:12.714 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:12.714 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:12.714 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:12.714 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:12.714 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:12.714 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:12.714 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:12.714 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:12.714 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:12.714 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:12.714 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:12.714 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.714 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.714 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.714 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.714 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.714 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.714 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.714 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 105011308 kB' 'MemAvailable: 109494520 kB' 'Buffers: 4124 kB' 'Cached: 14603412 kB' 'SwapCached: 0 kB' 'Active: 10746912 kB' 'Inactive: 4481552 kB' 'Active(anon): 10107048 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624252 kB' 'Mapped: 239672 kB' 'Shmem: 9486120 kB' 'KReclaimable: 372256 kB' 'Slab: 1247800 kB' 'SReclaimable: 372256 kB' 'SUnreclaim: 875544 kB' 'KernelStack: 27536 kB' 'PageTables: 9356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11541564 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238184 kB' 'VmallocChunk: 0 kB' 'Percpu: 130752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4441460 kB' 'DirectMap2M: 50812928 kB' 'DirectMap1G: 80740352 kB' 00:04:12.715 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.715 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.715 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.715 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.715 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.715 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.715 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.715 19:56:05 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue
[... setup/common.sh@32 continue, @31 IFS=': ', @31 read -r var val _ repeated for each /proc/meminfo field from Inactive through HardwareCorrupted; none of them matches AnonHugePages ...]
00:04:12.716 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.716 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.716 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:12.716 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
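anon=0 is the outcome of the transparent-hugepage probe in verify_nr_hugepages: hugepages.sh@96 sees the THP setting 'always [madvise] never' (which looks like the contents of /sys/kernel/mm/transparent_hugepage/enabled), checks that it is not pinned to [never], and get_meminfo AnonHugePages then returns 0 for this run. A short sketch of that check against the standard kernel interfaces (variable names mirror the trace; the awk line is a stand-in for get_meminfo):

  # Sketch of the THP probe behind hugepages.sh@96 and @97.
  thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
  if [[ $thp != *"[never]"* ]]; then
    # THP is enabled in some form, so anonymous huge pages could exist;
    # read how many kB are currently mapped (0 in this run).
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
  else
    anon=0
  fi
  echo "anon=$anon"

The trace then repeats the same keyed scan, this time for HugePages_Surp over the global /proc/meminfo, which is the printf that follows.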
00:04:12.716 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:12.716 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.716 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:12.716 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:12.716 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.716 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.716 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.716 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.716 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.716 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.716 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.716 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.716 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 105015776 kB' 'MemAvailable: 109498988 kB' 'Buffers: 4124 kB' 'Cached: 14603412 kB' 'SwapCached: 0 kB' 'Active: 10746480 kB' 'Inactive: 4481552 kB' 'Active(anon): 10106616 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623840 kB' 'Mapped: 239628 kB' 'Shmem: 9486120 kB' 'KReclaimable: 372256 kB' 'Slab: 1247792 kB' 'SReclaimable: 372256 kB' 'SUnreclaim: 875536 kB' 'KernelStack: 27664 kB' 'PageTables: 9640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11543188 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238168 kB' 'VmallocChunk: 0 kB' 'Percpu: 130752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4441460 kB' 'DirectMap2M: 50812928 kB' 'DirectMap1G: 80740352 kB' 00:04:12.716 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.716 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.716 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.716 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.716 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.716 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.716 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.716 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.716 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[... setup/common.sh@32 continue, @31 IFS=': ', @31 read -r var val _ repeated for each /proc/meminfo field from MemAvailable through FilePmdMapped; none of them matches HugePages_Surp ...]
00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc --
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 105013740 kB' 'MemAvailable: 109496952 kB' 'Buffers: 4124 kB' 'Cached: 14603432 kB' 'SwapCached: 0 kB' 'Active: 10746596 kB' 'Inactive: 4481552 kB' 'Active(anon): 10106732 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623924 kB' 'Mapped: 239628 kB' 'Shmem: 9486140 kB' 'KReclaimable: 372256 kB' 'Slab: 1247836 kB' 'SReclaimable: 372256 kB' 'SUnreclaim: 875580 kB' 'KernelStack: 27616 kB' 'PageTables: 9828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11543212 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238136 kB' 'VmallocChunk: 0 kB' 'Percpu: 130752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4441460 kB' 'DirectMap2M: 50812928 kB' 'DirectMap1G: 80740352 kB' 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.718 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _
[setup/common.sh@31-@32: the same read/compare/continue cycle repeats for Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total and HugePages_Free; none of them matches HugePages_Rsvd]
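
The repeated read/compare/continue cycles above are all instances of one small lookup over a meminfo file. Below is a minimal sketch of that pattern, reconstructed only from the xtrace lines in this log; the function name meminfo_lookup, the argument handling and the fallback behaviour are assumptions for illustration, not the verbatim get_meminfo from setup/common.sh.

    shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

    meminfo_lookup() {                     # hypothetical name; stands in for get_meminfo
        local get=$1 node=$2
        local var val _rest line
        local mem_f=/proc/meminfo mem

        # When a node id is given and the per-node sysfs file exists, read that instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem <"$mem_f"
        # Per-node meminfo lines carry a "Node <id> " prefix; strip it so the key
        # is the first field, just like in /proc/meminfo.
        mem=("${mem[@]#Node +([0-9]) }")

        for line in "${mem[@]}"; do
            # e.g. "HugePages_Surp:        0" -> var=HugePages_Surp, val=0
            # (for "MemTotal: 126338876 kB" the unit lands in the third field)
            IFS=': ' read -r var val _rest <<<"$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

    # meminfo_lookup HugePages_Surp    -> 0   (system-wide, as traced above)
    # meminfo_lookup HugePages_Surp 0  -> 0   (node 0, as traced further down)

Because IFS is set to ': ', read splits each line at the colon and the following spaces, so val holds the bare number for the HugePages_* counters this test cares about.
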
00:04:12.720 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:12.720 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:12.720 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:12.720 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:12.720 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:12.720 nr_hugepages=1024
19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:12.720 resv_hugepages=0
19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:12.720 surplus_hugepages=0
19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:12.720 anon_hugepages=0
19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:12.720 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:12.720 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:12.720 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:12.720 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:12.720 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:12.720 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:12.720 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.720 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.720 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.720 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.720 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
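
The hugepages.sh assertions just traced reduce to simple arithmetic on the values collected so far. The sketch below restates them with the numbers from this run; it reuses the hypothetical meminfo_lookup helper from the earlier sketch in place of the script's own get_meminfo.

    nr_hugepages=1024                          # requested: 1024 x 2048 kB = 2 GiB (matches 'Hugetlb: 2097152 kB' above)
    surp=$(meminfo_lookup HugePages_Surp)      # 0 in this run
    resv=$(meminfo_lookup HugePages_Rsvd)      # 0 in this run
    total=$(meminfo_lookup HugePages_Total)    # 1024, as the lookup traced next returns

    (( total == nr_hugepages + surp + resv ))  # 1024 == 1024 + 0 + 0 -> true
    (( total == nr_hugepages ))                # no surplus or reserved pages outstanding

Only after both checks hold does the test go on to verify how those 1024 pages are spread across the two NUMA nodes (512 per node, i.e. 1 GiB on each), which is what the per-node get_meminfo HugePages_Surp 0 call further down is doing.
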
00:04:12.720 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.720 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.721 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 105016052 kB' 'MemAvailable: 109499264 kB' 'Buffers: 4124 kB' 'Cached: 14603452 kB' 'SwapCached: 0 kB' 'Active: 10746320 kB' 'Inactive: 4481552 kB' 'Active(anon): 10106456 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623584 kB' 'Mapped: 239628 kB' 'Shmem: 9486160 kB' 'KReclaimable: 372256 kB' 'Slab: 1247836 kB' 'SReclaimable: 372256 kB' 'SUnreclaim: 875580 kB' 'KernelStack: 27520 kB' 'PageTables: 9364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11540384 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237960 kB' 'VmallocChunk: 0 kB' 'Percpu: 130752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4441460 kB' 'DirectMap2M: 50812928 kB' 'DirectMap1G: 80740352 kB' 00:04:12.721 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.721 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.721 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.721 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.721 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.721 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.721 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.721 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.721 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.721 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.721 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.721 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.721 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.721 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.721 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.721 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.721 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.721 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.721 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.721 19:56:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[setup/common.sh@31-@32: the same read/compare/continue cycle repeats for SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal and CmaFree; none of them matches HugePages_Total]
00:04:12.986 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:12.986 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.986
19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.986 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.986 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.986 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:12.986 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:12.986 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:12.986 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:12.986 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:12.986 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:12.986 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:12.986 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:12.986 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:12.986 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:12.986 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:12.986 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:12.986 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:12.986 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:12.986 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.986 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:12.986 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:12.986 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.986 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.986 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:12.986 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:12.986 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.986 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.986 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.986 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.986 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59017104 kB' 'MemUsed: 6641904 kB' 'SwapCached: 0 kB' 'Active: 3887460 kB' 'Inactive: 156628 kB' 'Active(anon): 3722688 kB' 'Inactive(anon): 0 kB' 'Active(file): 164772 kB' 'Inactive(file): 156628 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3884616 kB' 'Mapped: 47068 kB' 'AnonPages: 162612 kB' 'Shmem: 3563216 kB' 'KernelStack: 12712 kB' 'PageTables: 
3220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 167952 kB' 'Slab: 575112 kB' 'SReclaimable: 167952 kB' 'SUnreclaim: 407160 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:12.986 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.986 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.986 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.987 19:56:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.987 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.988 19:56:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679868 kB' 'MemFree: 46000720 kB' 'MemUsed: 14679148 kB' 'SwapCached: 0 kB' 'Active: 6858448 kB' 'Inactive: 4324924 kB' 'Active(anon): 6383356 kB' 'Inactive(anon): 0 kB' 'Active(file): 475092 kB' 'Inactive(file): 4324924 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10723004 kB' 'Mapped: 192548 kB' 'AnonPages: 460516 kB' 'Shmem: 5922988 kB' 'KernelStack: 14696 kB' 
'PageTables: 6016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 204304 kB' 'Slab: 672948 kB' 'SReclaimable: 204304 kB' 'SUnreclaim: 468644 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.988 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.989 19:56:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.989 19:56:05 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:12.989 node0=512 expecting 512 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:12.989 node1=512 expecting 512 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:12.989 00:04:12.989 real 0m4.376s 00:04:12.989 user 0m1.704s 00:04:12.989 sys 0m2.743s 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:12.989 19:56:05 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:12.989 ************************************ 00:04:12.989 END TEST even_2G_alloc 00:04:12.989 ************************************ 00:04:12.989 19:56:05 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:12.989 19:56:05 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:12.989 19:56:05 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:12.989 19:56:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:12.989 ************************************ 00:04:12.989 START TEST odd_alloc 00:04:12.989 
************************************ 00:04:12.989 19:56:05 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:04:12.989 19:56:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:12.989 19:56:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:12.989 19:56:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:12.989 19:56:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:12.989 19:56:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:12.989 19:56:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:12.989 19:56:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:12.989 19:56:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:12.989 19:56:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:12.989 19:56:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:12.989 19:56:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:12.989 19:56:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:12.989 19:56:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:12.989 19:56:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:12.989 19:56:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:12.989 19:56:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:12.989 19:56:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:12.989 19:56:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:12.989 19:56:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:12.989 19:56:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:12.989 19:56:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:12.989 19:56:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:12.989 19:56:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:12.989 19:56:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:12.989 19:56:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:12.989 19:56:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:12.989 19:56:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.989 19:56:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:17.203 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:17.203 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:17.203 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:17.203 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:17.203 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:17.203 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:17.203 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:17.203 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:17.203 
0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:17.203 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:17.203 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:17.203 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:17.203 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:17.203 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:17.203 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:17.203 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:17.203 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 105018972 kB' 'MemAvailable: 109502184 kB' 'Buffers: 4124 kB' 'Cached: 14603608 kB' 'SwapCached: 0 kB' 'Active: 10748568 kB' 'Inactive: 4481552 kB' 'Active(anon): 10108704 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624816 kB' 'Mapped: 239740 kB' 'Shmem: 9486316 kB' 'KReclaimable: 372256 kB' 'Slab: 1247560 kB' 'SReclaimable: 372256 kB' 'SUnreclaim: 875304 kB' 'KernelStack: 27440 kB' 'PageTables: 9376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508440 kB' 'Committed_AS: 11541588 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237832 kB' 
'VmallocChunk: 0 kB' 'Percpu: 130752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4441460 kB' 'DirectMap2M: 50812928 kB' 'DirectMap1G: 80740352 kB' 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.203 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 19:56:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
126338876 kB' 'MemFree: 105020500 kB' 'MemAvailable: 109503712 kB' 'Buffers: 4124 kB' 'Cached: 14603608 kB' 'SwapCached: 0 kB' 'Active: 10747576 kB' 'Inactive: 4481552 kB' 'Active(anon): 10107712 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624720 kB' 'Mapped: 239632 kB' 'Shmem: 9486316 kB' 'KReclaimable: 372256 kB' 'Slab: 1247512 kB' 'SReclaimable: 372256 kB' 'SUnreclaim: 875256 kB' 'KernelStack: 27408 kB' 'PageTables: 9236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508440 kB' 'Committed_AS: 11541604 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237832 kB' 'VmallocChunk: 0 kB' 'Percpu: 130752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4441460 kB' 'DirectMap2M: 50812928 kB' 'DirectMap1G: 80740352 kB' 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.204 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
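Note on the snapshot above: the hugepage counters in this /proc/meminfo dump are internally consistent. HugePages_Total: 1025 at Hugepagesize: 2048 kB gives 1025 * 2048 kB = 2,099,200 kB, which matches the reported Hugetlb: 2099200 kB, and with HugePages_Free: 1025 and HugePages_Rsvd/HugePages_Surp at 0 none of the odd-sized pool is in use yet.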
00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.205 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 19:56:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 105020248 kB' 'MemAvailable: 109503460 kB' 'Buffers: 4124 kB' 'Cached: 14603628 kB' 'SwapCached: 0 kB' 'Active: 10747280 kB' 'Inactive: 4481552 kB' 'Active(anon): 10107416 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624460 kB' 'Mapped: 239632 kB' 'Shmem: 9486336 kB' 'KReclaimable: 372256 kB' 'Slab: 1247512 kB' 'SReclaimable: 372256 kB' 'SUnreclaim: 875256 kB' 'KernelStack: 27392 kB' 'PageTables: 9188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508440 kB' 'Committed_AS: 11541624 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237832 kB' 'VmallocChunk: 0 kB' 'Percpu: 130752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4441460 kB' 'DirectMap2M: 50812928 kB' 'DirectMap1G: 80740352 kB' 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- 
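The trace above (setup/common.sh@17-33) repeats the same lookup for each requested key: get_meminfo captures a /proc/meminfo snapshot, splits every line on ': ' with read -r var val _, and echoes the value column once the key matches, so HugePages_Surp and HugePages_Rsvd both come back as 0 here. Below is a minimal, self-contained sketch of that parsing idea only, not the SPDK helper itself: the function name get_field is made up for this example, and it reads /proc/meminfo directly instead of a captured, node-prefix-stripped snapshot.

get_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # e.g. "HugePages_Surp:       0"  ->  var=HugePages_Surp, val=0
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

get_field HugePages_Surp    # prints 0 on the node traced above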
setup/common.sh@32 -- # continue 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.206 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.207 19:56:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.207 
19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.207 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:17.208 nr_hugepages=1025 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:17.208 resv_hugepages=0 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:17.208 surplus_hugepages=0 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:17.208 anon_hugepages=0 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- 
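At this point hugepages.sh has collected anon=0 (AnonHugePages), surp=0 (HugePages_Surp) and resv=0 (HugePages_Rsvd), echoes the nr_hugepages=1025 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0 summary, runs the arithmetic checks at @107/@109, and then re-reads HugePages_Total at @110. The sketch below illustrates that accounting step under stated assumptions: it reuses the hypothetical get_field helper from the earlier sketch, and it treats the literal 1025 in the xtrace (already expanded by bash -x) as the page count the odd_alloc test requested.

nr_hugepages=1025                     # odd page count requested by the odd_alloc test (assumption)
anon=$(get_field AnonHugePages)       # 0 in the trace above
surp=$(get_field HugePages_Surp)      # 0
resv=$(get_field HugePages_Rsvd)      # 0
total=$(get_field HugePages_Total)    # 1025

echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

# Mirrors "(( 1025 == nr_hugepages + surp + resv ))" and "(( 1025 == nr_hugepages ))":
# the pool the kernel reports must be fully explained by the requested pages plus
# any surplus and reserved pages, and for this test it must equal the request exactly.
(( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages ))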
setup/common.sh@20 -- # local mem_f mem 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 105020248 kB' 'MemAvailable: 109503460 kB' 'Buffers: 4124 kB' 'Cached: 14603628 kB' 'SwapCached: 0 kB' 'Active: 10747784 kB' 'Inactive: 4481552 kB' 'Active(anon): 10107920 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624964 kB' 'Mapped: 239632 kB' 'Shmem: 9486336 kB' 'KReclaimable: 372256 kB' 'Slab: 1247512 kB' 'SReclaimable: 372256 kB' 'SUnreclaim: 875256 kB' 'KernelStack: 27392 kB' 'PageTables: 9188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508440 kB' 'Committed_AS: 11541648 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237832 kB' 'VmallocChunk: 0 kB' 'Percpu: 130752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4441460 kB' 'DirectMap2M: 50812928 kB' 'DirectMap1G: 80740352 kB' 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 19:56:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.208 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.209 19:56:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.209 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59025176 kB' 'MemUsed: 6633832 kB' 'SwapCached: 0 kB' 'Active: 3886620 kB' 'Inactive: 156628 kB' 'Active(anon): 3721848 kB' 'Inactive(anon): 0 kB' 'Active(file): 164772 kB' 'Inactive(file): 156628 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3884684 kB' 'Mapped: 47068 kB' 'AnonPages: 161720 kB' 'Shmem: 3563284 kB' 'KernelStack: 12680 kB' 'PageTables: 3120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 167952 kB' 'Slab: 574916 kB' 'SReclaimable: 167952 kB' 'SUnreclaim: 406964 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 
0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.210 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
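
The field-by-field trace above and below is setup/common.sh's get_meminfo walking every key of /proc/meminfo (or a node's meminfo file) until it hits the one requested, here HugePages_Surp. As a minimal standalone sketch of that same lookup, assuming only standard /proc and per-node sysfs files (the helper name and the awk approach are illustrative, not the SPDK helper itself):

# Illustrative only: return one field from /proc/meminfo, or from a NUMA
# node's meminfo when a node ID is given (per-node lines carry a "Node N"
# prefix, which the positional match below tolerates).
get_meminfo_sketch() {
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    awk -v k="$key:" '{ for (i = 1; i <= NF; i++) if ($i == k) { print $(i + 1); exit } }' "$mem_f"
}
# Example: get_meminfo_sketch HugePages_Surp 0   ->  0
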
00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679868 kB' 'MemFree: 45995028 kB' 'MemUsed: 14684840 kB' 'SwapCached: 0 kB' 'Active: 6860916 kB' 'Inactive: 4324924 kB' 'Active(anon): 6385824 kB' 'Inactive(anon): 0 kB' 'Active(file): 475092 kB' 'Inactive(file): 4324924 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10723104 kB' 'Mapped: 192564 kB' 'AnonPages: 462916 kB' 'Shmem: 5923088 kB' 'KernelStack: 14712 kB' 'PageTables: 6068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 204304 kB' 'Slab: 672596 kB' 'SReclaimable: 204304 kB' 'SUnreclaim: 468292 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
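
With node 0 done, the same scan repeats for node 1. The bookkeeping it feeds is the odd_alloc check already visible earlier in this trace (hugepages.sh@110 asserting 1025 == nr_hugepages + surp + resv, then comparing per-node totals against 512 and 513). A hedged sketch of that per-node verification, using only the sysfs files read here (array and variable names are illustrative, not the SPDK test's own):

# Illustrative only: sum HugePages_Total across NUMA nodes and confirm the
# odd 1025-page request was split 512/513 between node0 and node1.
declare -A node_pages
for d in /sys/devices/system/node/node[0-9]*; do
    n=${d##*node}
    # Per-node meminfo lines look like: "Node 0 HugePages_Total:   512"
    node_pages[$n]=$(awk '$3 == "HugePages_Total:" { print $4 }' "$d/meminfo")
done
total=0
for n in "${!node_pages[@]}"; do (( total += node_pages[n] )); done
(( total == 1025 )) && echo "odd_alloc OK: node0=${node_pages[0]} node1=${node_pages[1]}"
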
00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.211 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.212 19:56:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.212 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.213 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.213 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.213 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.213 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.213 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.213 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.213 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.213 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.213 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.213 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.213 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.213 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.213 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.213 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.213 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.213 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.213 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.213 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.213 19:56:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:17.213 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:17.213 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node 
in "${!nodes_test[@]}" 00:04:17.213 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:17.213 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:17.213 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:17.213 node0=512 expecting 513 00:04:17.213 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:17.213 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:17.213 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:17.213 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:17.213 node1=513 expecting 512 00:04:17.213 19:56:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:17.213 00:04:17.213 real 0m4.270s 00:04:17.213 user 0m1.630s 00:04:17.213 sys 0m2.701s 00:04:17.213 19:56:09 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:17.213 19:56:09 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:17.213 ************************************ 00:04:17.213 END TEST odd_alloc 00:04:17.213 ************************************ 00:04:17.213 19:56:09 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:17.213 19:56:09 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:17.213 19:56:09 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:17.213 19:56:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:17.474 ************************************ 00:04:17.474 START TEST custom_alloc 00:04:17.474 ************************************ 00:04:17.474 19:56:09 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:04:17.474 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:17.474 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:17.474 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:17.474 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:17.474 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:17.474 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:17.474 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:17.474 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:17.474 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:17.474 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:17.474 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:17.474 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:17.474 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:17.474 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:17.474 19:56:09 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:17.474 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:17.474 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:17.474 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:17.474 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:17.474 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:17.474 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:17.474 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:17.474 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in 
"${!nodes_hp[@]}" 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.475 19:56:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:21.710 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:21.710 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:21.710 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:21.710 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:21.710 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:21.710 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:21.710 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:21.710 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:21.710 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:21.710 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:21.710 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:21.710 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:21.710 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 
00:04:21.710 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:21.710 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:21.710 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:21.710 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:21.710 19:56:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:21.710 19:56:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:21.710 19:56:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:21.710 19:56:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:21.710 19:56:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:21.710 19:56:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:21.710 19:56:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:21.710 19:56:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:21.710 19:56:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:21.710 19:56:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:21.710 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:21.710 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:21.710 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:21.710 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 103958128 kB' 'MemAvailable: 108441340 kB' 'Buffers: 4124 kB' 'Cached: 14603776 kB' 'SwapCached: 0 kB' 'Active: 10750084 kB' 'Inactive: 4481552 kB' 'Active(anon): 10110220 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 627092 kB' 'Mapped: 239784 kB' 'Shmem: 9486484 kB' 'KReclaimable: 372256 kB' 'Slab: 1247952 kB' 'SReclaimable: 372256 kB' 'SUnreclaim: 875696 kB' 'KernelStack: 27504 kB' 'PageTables: 9276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985176 kB' 'Committed_AS: 11563516 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237976 kB' 'VmallocChunk: 0 kB' 'Percpu: 130752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4441460 kB' 'DirectMap2M: 50812928 kB' 'DirectMap1G: 80740352 kB' 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 19:56:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.711 19:56:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.711 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.712 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.712 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.712 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.712 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.712 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.712 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.712 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.712 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.712 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.712 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.712 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.712 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.712 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.712 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.712 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.712 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.712 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.712 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.712 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.712 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.712 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.712 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.712 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.712 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.712 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.712 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.712 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.712 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.712 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.712 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.712 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:04:21.712 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.712 19:56:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
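
The AnonHugePages lookup just above ends with anon=0 (hugepages.sh@97), and the same per-key walk now repeats for HugePages_Surp. A condensed sketch of the traced get_meminfo helper, reconstructed from the common.sh@17-33 entries in this log (the loop form and other details are assumptions, not the verbatim upstream source):

    #!/usr/bin/env bash
    # Sketch of get_meminfo as reconstructed from common.sh@17-33 in this trace.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # With a node argument, prefer the per-node meminfo file (common.sh@22-25).
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node N "; strip it (common.sh@29).
        mem=("${mem[@]#Node +([0-9]) }")

        # One "[[ ... ]]" / "continue" pair per meminfo key in the xtrace (common.sh@31-33).
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"    # e.g. 0 for AnonHugePages, 1536 for HugePages_Total in this run
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo AnonHugePages    # -> 0 in this run
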
00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 103958792 kB' 'MemAvailable: 108442004 kB' 'Buffers: 4124 kB' 'Cached: 14603776 kB' 'SwapCached: 0 kB' 'Active: 10748032 kB' 'Inactive: 4481552 kB' 'Active(anon): 10108168 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 625016 kB' 'Mapped: 239736 kB' 'Shmem: 9486484 kB' 'KReclaimable: 372256 kB' 'Slab: 1247480 kB' 'SReclaimable: 372256 kB' 'SUnreclaim: 875224 kB' 'KernelStack: 27376 kB' 'PageTables: 9092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985176 kB' 'Committed_AS: 11542296 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237848 kB' 'VmallocChunk: 0 kB' 'Percpu: 130752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4441460 kB' 'DirectMap2M: 50812928 kB' 'DirectMap1G: 80740352 kB' 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.712 19:56:14 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.712 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.713 
19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.713 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
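
A note on reading these checks: the backslash-riddled right-hand sides such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are bash xtrace quoting of a literal (quoted) pattern word, not log corruption; unquoted glob characters stay unescaped, as in the *\[\n\e\v\e\r\]* check at hugepages.sh@96 earlier. A minimal reproduction, assuming the helper compares against a quoted "$get":

    set -x
    get=HugePages_Surp
    var=MemTotal
    # Traced as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
    if [[ $var == "$get" ]]; then
        echo "matched $get"
    else
        echo "no match"    # the helper's '|| continue' branch seen throughout this log
    fi
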
00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 103959188 kB' 'MemAvailable: 108442400 kB' 'Buffers: 4124 kB' 'Cached: 14603800 kB' 'SwapCached: 0 kB' 'Active: 10747780 kB' 'Inactive: 4481552 kB' 'Active(anon): 10107916 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624684 kB' 'Mapped: 239652 kB' 'Shmem: 
9486508 kB' 'KReclaimable: 372256 kB' 'Slab: 1247500 kB' 'SReclaimable: 372256 kB' 'SUnreclaim: 875244 kB' 'KernelStack: 27376 kB' 'PageTables: 9040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985176 kB' 'Committed_AS: 11542320 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237848 kB' 'VmallocChunk: 0 kB' 'Percpu: 130752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4441460 kB' 'DirectMap2M: 50812928 kB' 'DirectMap1G: 80740352 kB' 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.714 
19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.714 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
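
The HugePages_Rsvd walk in progress here also ends with resv=0 (hugepages.sh@100), after which the script echoes the four counters and checks the pool totals at hugepages.sh@107-110. A minimal, self-contained accounting sketch with this run's values (the awk stand-in for get_meminfo and the source of the literal 1536 in the traced checks are assumptions):

    #!/usr/bin/env bash
    # Accounting sketch for verify_nr_hugepages (hugepages.sh@97-110); the helper
    # below is a stand-in, not the traced common.sh implementation.
    get_meminfo() { awk -v k="$1" -F': +' '$1 == k {print $2 + 0; exit}' /proc/meminfo; }

    nr_hugepages=1536                       # requested by custom_alloc (hugepages.sh@188)
    anon=$(get_meminfo AnonHugePages)       # 0 in this run
    surp=$(get_meminfo HugePages_Surp)      # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    # Traced as (( 1536 == nr_hugepages + surp + resv )) and (( 1536 == nr_hugepages )).
    # HugePages_Total and HugePages_Free are both 1536 here, so which counter feeds
    # the second check cannot be told from the trace alone; Total is assumed.
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1
    (( $(get_meminfo HugePages_Total) == nr_hugepages )) || exit 1
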
00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.715 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:21.716 nr_hugepages=1536 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:21.716 resv_hugepages=0 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:21.716 surplus_hugepages=0 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:21.716 anon_hugepages=0 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 103958432 kB' 'MemAvailable: 108441644 kB' 'Buffers: 4124 kB' 'Cached: 14603820 kB' 'SwapCached: 0 kB' 'Active: 10747820 kB' 'Inactive: 4481552 kB' 'Active(anon): 10107956 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624684 kB' 'Mapped: 239652 kB' 'Shmem: 9486528 kB' 'KReclaimable: 372256 kB' 'Slab: 1247500 kB' 'SReclaimable: 372256 kB' 'SUnreclaim: 875244 kB' 'KernelStack: 27376 kB' 'PageTables: 9040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985176 kB' 'Committed_AS: 11542348 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237848 kB' 'VmallocChunk: 0 kB' 'Percpu: 130752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 
4441460 kB' 'DirectMap2M: 50812928 kB' 'DirectMap1G: 80740352 kB' 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.716 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.717 19:56:14 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.717 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59024856 kB' 'MemUsed: 6634152 kB' 'SwapCached: 0 kB' 'Active: 3888196 kB' 'Inactive: 156628 kB' 'Active(anon): 3723424 kB' 'Inactive(anon): 0 kB' 'Active(file): 164772 kB' 'Inactive(file): 156628 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3884780 kB' 'Mapped: 47068 kB' 'AnonPages: 163184 kB' 'Shmem: 3563380 kB' 'KernelStack: 12664 kB' 'PageTables: 2980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 167952 kB' 'Slab: 574848 kB' 'SReclaimable: 167952 kB' 'SUnreclaim: 406896 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.718 19:56:14 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.718 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
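What the long scan above amounts to: the get_meminfo helper in setup/common.sh walks one meminfo source field by field, skipping every key that is not the one requested (here HugePages_Surp for node 0, read from /sys/devices/system/node/node0/meminfo) and echoing the value of the first match. A rough, self-contained sketch of that behaviour follows; it is modeled on the xtrace output, not copied from setup/common.sh, and the function name is illustrative.

    #!/usr/bin/env bash
    # get_meminfo_sketch FIELD [NODE] - print FIELD's value from /proc/meminfo,
    # or from the per-node meminfo file when NODE is given (as in the trace above).
    get_meminfo_sketch() {
        local get=$1 node=$2 mem_f=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#"Node $node "}             # per-node files prefix every line with "Node N "
            IFS=': ' read -r var val _ <<< "$line" # split "Key: value unit" into var/val
            [[ $var == "$get" ]] && { echo "$val"; return 0; }  # found the field, stop scanning
        done < "$mem_f"
        return 1                                   # field not present in this source
    }
    # e.g. get_meminfo_sketch HugePages_Surp 0  -> prints 0 on this box, matching the log

Every non-matching field simply hits the "continue" branch seen in the trace, which is why the scan produces one [[ ... ]] / continue pair per meminfo line.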
00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 
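At this point node 0 is done: HugePages_Surp came back 0, so nothing is added on top of the 512 pages already counted for that node, and the same query is about to run against /sys/devices/system/node/node1/meminfo. The arithmetic behind the "node0=512 expecting 512" / "node1=1024 expecting 1024" lines further below amounts to the following loose, standalone sketch; variable names are illustrative, not lifted from setup/hugepages.sh.

    resv=0                                   # HugePages_Rsvd read from /proc/meminfo earlier
    declare -a expected=(512 1024)           # the custom 512/1024 split requested by this test
    declare -a surplus=(0 0)                 # per-node HugePages_Surp, both 0 in this run
    declare -a observed=()
    for node in 0 1; do
        observed[node]=$(( expected[node] + resv + surplus[node] ))
        echo "node$node=${observed[node]} expecting ${expected[node]}"
    done
    # mirrors the final "[[ 512,1024 == \5\1\2\,\1\0\2\4 ]]" check in the log
    [[ ${observed[0]},${observed[1]} == "${expected[0]},${expected[1]}" ]] && echo "custom_alloc split verified"

With reserved and surplus pages both at 0, the observed per-node totals equal the requested split, which is what lets the custom_alloc test pass below.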
00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.719 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679868 kB' 'MemFree: 44932928 kB' 'MemUsed: 15746940 kB' 'SwapCached: 0 kB' 'Active: 6859680 kB' 'Inactive: 4324924 kB' 'Active(anon): 6384588 kB' 'Inactive(anon): 0 kB' 'Active(file): 475092 kB' 'Inactive(file): 4324924 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10723208 kB' 'Mapped: 192584 kB' 'AnonPages: 461544 kB' 'Shmem: 5923192 kB' 'KernelStack: 14728 kB' 'PageTables: 6092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 204304 kB' 'Slab: 672652 kB' 'SReclaimable: 204304 kB' 'SUnreclaim: 468348 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.720 19:56:14 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.720 19:56:14 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
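A side note on reading these [[ ... == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] lines: the backslashes are not in the script; they are how bash's xtrace prints the right-hand side of a [[ ]] comparison when it is a quoted, literal string rather than a glob pattern. A minimal standalone reproduction (not taken from the SPDK scripts) is:

    set -x                              # same tracing mode the autotest harness runs under
    get=HugePages_Surp
    for var in MemFree Cached HugePages_Surp; do
        # xtrace renders the quoted "$get" as \H\u\g\e\P\a\g\e\s\_\S\u\r\p, as seen above
        [[ $var == "$get" ]] && echo "matched $var" && break
    done
    set +x

So each trace line here is just the node-1 scan rejecting one more meminfo field before it reaches HugePages_Surp.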
00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.720 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.721 19:56:14 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:21.721 node0=512 expecting 512 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:21.721 node1=1024 expecting 1024 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:21.721 00:04:21.721 real 0m4.467s 00:04:21.721 user 0m1.817s 00:04:21.721 sys 0m2.717s 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:21.721 19:56:14 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:21.721 ************************************ 00:04:21.721 END TEST custom_alloc 00:04:21.721 ************************************ 00:04:21.982 19:56:14 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:21.982 19:56:14 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:21.982 19:56:14 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:21.982 19:56:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:21.982 ************************************ 00:04:21.982 START TEST no_shrink_alloc 00:04:21.982 ************************************ 00:04:21.982 19:56:14 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:04:21.982 19:56:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:21.982 19:56:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:21.982 19:56:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:21.982 19:56:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:21.982 19:56:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:21.982 19:56:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:21.982 19:56:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:21.982 19:56:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:21.982 19:56:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:21.982 19:56:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:21.982 19:56:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:21.982 19:56:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:21.982 19:56:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:21.982 19:56:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:21.982 19:56:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:21.982 19:56:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:21.982 19:56:14 setup.sh.hugepages.no_shrink_alloc -- 
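Right above, custom_alloc closes out: the per-node echoes report node0=512 and node1=1024, the final check compares "512,1024" against the expected string, and the test summary records roughly 4.5 s wall time. The trace that follows is no_shrink_alloc translating its requested size into a hugepage count for node 0. A rough restatement of that bookkeeping; the 2048 kB divisor is inferred from the numbers in the trace (2097152 / 2048 = 1024) and the names only loosely mirror setup/hugepages.sh:

# Values taken from the trace: get_test_nr_hugepages 2097152 0
size=2097152                                  # requested size (kB, judging by the arithmetic)
default_hugepages=2048                        # Hugepagesize reported in the meminfo dumps
nr_hugepages=$(( size / default_hugepages ))  # 2097152 / 2048 = 1024 pages
user_nodes=(0)                                # node ids passed after the size
nodes_test=()
for node in "${user_nodes[@]}"; do
    nodes_test[node]=$nr_hugepages            # trace: nodes_test[_no_nodes]=1024
done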
setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:21.982 19:56:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:21.982 19:56:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:21.982 19:56:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:21.982 19:56:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.982 19:56:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:26.199 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:26.199 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:26.199 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:26.199 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:26.199 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:26.199 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:26.199 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:26.199 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:26.199 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:26.199 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:26.199 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:26.199 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:26.199 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:26.199 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:26.199 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:26.199 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:26.199 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:26.199 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:26.199 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:26.199 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:26.199 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:26.199 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:26.199 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:26.199 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:26.199 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:26.199 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:26.199 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:26.199 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:26.199 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:26.199 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.199 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.199 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.199 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' 
]] 00:04:26.199 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.199 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.199 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.199 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.199 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 105015248 kB' 'MemAvailable: 109498460 kB' 'Buffers: 4124 kB' 'Cached: 14603968 kB' 'SwapCached: 0 kB' 'Active: 10752080 kB' 'Inactive: 4481552 kB' 'Active(anon): 10112216 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 628520 kB' 'Mapped: 240296 kB' 'Shmem: 9486676 kB' 'KReclaimable: 372256 kB' 'Slab: 1247756 kB' 'SReclaimable: 372256 kB' 'SUnreclaim: 875500 kB' 'KernelStack: 27376 kB' 'PageTables: 9220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11546504 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237896 kB' 'VmallocChunk: 0 kB' 'Percpu: 130752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4441460 kB' 'DirectMap2M: 50812928 kB' 'DirectMap1G: 80740352 kB' 00:04:26.199 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.199 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.199 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.199 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.199 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.199 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.199 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.199 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.199 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.199 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.200 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 105015368 kB' 'MemAvailable: 109498580 kB' 'Buffers: 4124 kB' 'Cached: 14603972 kB' 'SwapCached: 0 kB' 'Active: 10754944 kB' 'Inactive: 4481552 kB' 'Active(anon): 10115080 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 631376 kB' 'Mapped: 240608 kB' 'Shmem: 9486680 kB' 'KReclaimable: 372256 kB' 'Slab: 1247748 kB' 'SReclaimable: 372256 kB' 'SUnreclaim: 875492 kB' 'KernelStack: 27376 kB' 'PageTables: 9228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11549572 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237868 kB' 'VmallocChunk: 0 kB' 'Percpu: 130752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4441460 kB' 'DirectMap2M: 50812928 kB' 'DirectMap1G: 80740352 kB' 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.201 
19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.201 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.202 19:56:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.202 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.203 19:56:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 105015840 kB' 'MemAvailable: 109499052 kB' 'Buffers: 4124 kB' 'Cached: 14603972 kB' 'SwapCached: 0 kB' 'Active: 10754516 kB' 'Inactive: 4481552 kB' 'Active(anon): 10114652 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 631364 kB' 'Mapped: 240592 kB' 'Shmem: 9486680 kB' 'KReclaimable: 372256 kB' 'Slab: 1247744 kB' 'SReclaimable: 372256 kB' 'SUnreclaim: 875488 kB' 'KernelStack: 27376 kB' 'PageTables: 9220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11549592 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237884 kB' 'VmallocChunk: 0 kB' 'Percpu: 130752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4441460 kB' 'DirectMap2M: 50812928 kB' 'DirectMap1G: 80740352 
kB' 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:26.203 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.204 19:56:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.204 
19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.204 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.205 19:56:18 
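The long run of continue entries above is setup/common.sh walking /proc/meminfo one field at a time until it reaches the key it was asked for (here HugePages_Rsvd, which evaluates to 0 just above). A minimal sketch of that helper, reconstructed from the common.sh@16-@33 markers in the trace; the structure and names are inferred from the xtrace output and may differ from the real script:

#!/usr/bin/env bash
# Sketch of the get_meminfo helper behind the scan above (common.sh@16-@33).
# Reconstructed from the xtrace output; details of the real SPDK script may differ.
shopt -s extglob

get_meminfo() {
	local get=$1 node=${2:-} # e.g. get_meminfo HugePages_Rsvd, get_meminfo HugePages_Surp 0
	local var val
	local mem_f mem

	mem_f=/proc/meminfo
	# common.sh@23-@24: use the per-NUMA-node file when a node was requested and it exists
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	# common.sh@29: per-node lines carry a "Node <N> " prefix - strip it (needs extglob)
	mem=("${mem[@]#Node +([0-9]) }")

	# common.sh@31-@33: every field that is not the requested one shows up as a
	# single "continue" in the trace; the matching field echoes its value and returns
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val" # the "kB" suffix, when present, lands in the discarded third field
		return 0
	done < <(printf '%s\n' "${mem[@]}")

	return 1
}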
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:26.205 nr_hugepages=1024 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:26.205 resv_hugepages=0 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:26.205 surplus_hugepages=0 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:26.205 anon_hugepages=0 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 105022728 kB' 'MemAvailable: 109505940 kB' 'Buffers: 4124 kB' 'Cached: 14604012 kB' 'SwapCached: 0 kB' 'Active: 10748740 kB' 'Inactive: 4481552 kB' 'Active(anon): 10108876 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 625468 kB' 'Mapped: 239688 kB' 'Shmem: 9486720 kB' 'KReclaimable: 372256 kB' 'Slab: 1247744 kB' 'SReclaimable: 372256 kB' 'SUnreclaim: 875488 kB' 'KernelStack: 27328 kB' 'PageTables: 9048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11543496 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237896 kB' 'VmallocChunk: 0 kB' 'Percpu: 130752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 
4441460 kB' 'DirectMap2M: 50812928 kB' 'DirectMap1G: 80740352 kB' 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.205 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.206 19:56:18 
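The meminfo snapshot printed at the start of this HugePages_Total scan is internally consistent with the 2 MiB page size the test works with: 1024 huge pages at a Hugepagesize of 2048 kB account exactly for the reported Hugetlb figure. A quick check of that arithmetic, with the numbers copied from the dump above:

# Values copied from the 'HugePages_Total: 1024', 'Hugepagesize: 2048 kB' and
# 'Hugetlb: 2097152 kB' entries in the dump above.
hugepages_total=1024
hugepagesize_kb=2048
hugetlb_kb=2097152

(( hugepages_total * hugepagesize_kb == hugetlb_kb )) && echo "Hugetlb consistent: ${hugetlb_kb} kB"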
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 
19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.206 19:56:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.206 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # 
nodes_sys[${node##*node}]=1024 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 57995272 kB' 'MemUsed: 7663736 kB' 'SwapCached: 0 kB' 'Active: 3887888 kB' 'Inactive: 156628 kB' 'Active(anon): 3723116 kB' 'Inactive(anon): 0 kB' 'Active(file): 164772 kB' 'Inactive(file): 156628 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3884888 kB' 'Mapped: 47068 kB' 'AnonPages: 162824 kB' 'Shmem: 3563488 kB' 'KernelStack: 12664 kB' 'PageTables: 3072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 167952 kB' 'Slab: 575032 kB' 'SReclaimable: 167952 kB' 'SUnreclaim: 407080 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- 
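Just above, hugepages.sh@27-@33 records the current per-node allocation: nodes_sys[0]=1024, nodes_sys[1]=0, no_nodes=2, and the test then re-queries node 0 through get_meminfo HugePages_Surp 0, which is why the scan switches to /sys/devices/system/node/node0/meminfo. The xtrace only shows the already-expanded assignments, so the sketch below fills in one plausible source for those numbers (the per-node nr_hugepages counters in sysfs); treat the loop body as an assumption rather than the exact SPDK code:

# Sketch of the get_nodes walk traced at hugepages.sh@27-@33. The sysfs glob is the
# one visible in the trace; how nodes_sys[] is filled is assumed (per-node 2048 kB
# nr_hugepages counters), which yields the same 1024/0 split as the log.
shopt -s extglob

declare -a nodes_sys=()
get_nodes() {
	local node
	for node in /sys/devices/system/node/node+([0-9]); do
		# "${node##*node}" keeps only the numeric node id (node0 -> 0)
		nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
	done
	no_nodes=${#nodes_sys[@]} # 2 on this machine
	((no_nodes > 0))
}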
setup/common.sh@32 -- # continue 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.207 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 19:56:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 19:56:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.208 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.209 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.209 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.209 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.209 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.209 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.209 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:26.209 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:26.209 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:26.209 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:26.209 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:26.209 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:26.209 node0=1024 expecting 1024 00:04:26.209 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:26.209 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:26.209 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:26.209 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:26.209 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.209 19:56:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:30.471 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:30.471 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:30.471 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:30.471 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:30.471 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:30.471 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:30.471 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:30.471 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:30.471 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:30.471 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:30.471 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:30.471 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:30.471 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:30.471 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:30.471 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:30.471 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:30.471 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:30.471 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:30.471 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:30.471 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:30.471 19:56:22 setup.sh.hugepages.no_shrink_alloc -- 
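At this point the accounting is complete: HugePages_Rsvd=0, HugePages_Surp=0, HugePages_Total=1024, and the per-node comparison prints 'node0=1024 expecting 1024' before the test re-runs scripts/setup.sh with NRHUGE=512 and CLEAR_HUGE=no, which deliberately leaves the existing allocation in place ('Requested 512 hugepages but 1024 already allocated on node0' - the point of the no_shrink_alloc case). The checks traced at hugepages.sh@107-@130 boil down to the comparison below; the function body is a paraphrase of the trace, the arrays are seeded with the values from this log, and get_meminfo is the sketch given earlier:

# Paraphrase of the verification traced at hugepages.sh@107-@130; not the literal
# SPDK function. nodes_test[] is the expected per-node split, nodes_sys[] the one
# observed by get_nodes; both hold the values seen in this log.
declare -a nodes_test=([0]=1024 [1]=0)
declare -a nodes_sys=([0]=1024 [1]=0)

verify_nr_hugepages_sketch() {
	local nr_hugepages=1024 resv surp node

	resv=$(get_meminfo HugePages_Rsvd) # 0 in the scan above
	surp=$(get_meminfo HugePages_Surp) # 0 in the scan above

	# hugepages.sh@107-@110: global accounting against /proc/meminfo
	(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || return 1

	# hugepages.sh@115-@130: per-node accounting, including per-node surplus pages
	for node in "${!nodes_test[@]}"; do
		(( nodes_test[node] += resv + $(get_meminfo HugePages_Surp "$node") ))
		echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
		[[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || return 1
	done
}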
setup/hugepages.sh@90 -- # local sorted_t 00:04:30.471 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:30.471 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:30.471 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:30.471 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:30.471 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:30.471 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:30.471 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:30.471 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:30.471 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.471 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.471 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.471 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.471 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.471 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.471 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.471 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.471 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 105011320 kB' 'MemAvailable: 109494532 kB' 'Buffers: 4124 kB' 'Cached: 14604128 kB' 'SwapCached: 0 kB' 'Active: 10751084 kB' 'Inactive: 4481552 kB' 'Active(anon): 10111220 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 627608 kB' 'Mapped: 240244 kB' 'Shmem: 9486836 kB' 'KReclaimable: 372256 kB' 'Slab: 1248548 kB' 'SReclaimable: 372256 kB' 'SUnreclaim: 876292 kB' 'KernelStack: 27632 kB' 'PageTables: 9516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11548828 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238072 kB' 'VmallocChunk: 0 kB' 'Percpu: 130752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4441460 kB' 'DirectMap2M: 50812928 kB' 'DirectMap1G: 80740352 kB' 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.472 19:56:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.472 19:56:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.472 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # 
mapfile -t mem 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 105012428 kB' 'MemAvailable: 109495640 kB' 'Buffers: 4124 kB' 'Cached: 14604128 kB' 'SwapCached: 0 kB' 'Active: 10755204 kB' 'Inactive: 4481552 kB' 'Active(anon): 10115340 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 631748 kB' 'Mapped: 240224 kB' 'Shmem: 9486836 kB' 'KReclaimable: 372256 kB' 'Slab: 1248548 kB' 'SReclaimable: 372256 kB' 'SUnreclaim: 876292 kB' 'KernelStack: 27680 kB' 'PageTables: 10024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11551648 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238024 kB' 'VmallocChunk: 0 kB' 'Percpu: 130752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4441460 kB' 'DirectMap2M: 50812928 kB' 'DirectMap1G: 80740352 kB' 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.473 19:56:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.473 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.474 
19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.474 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.475 19:56:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 105011996 kB' 'MemAvailable: 109495208 kB' 'Buffers: 4124 kB' 'Cached: 14604148 kB' 'SwapCached: 0 kB' 'Active: 10750444 kB' 'Inactive: 4481552 kB' 'Active(anon): 10110580 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 626916 kB' 'Mapped: 239712 kB' 'Shmem: 9486856 kB' 'KReclaimable: 372256 kB' 'Slab: 1248572 kB' 'SReclaimable: 372256 kB' 'SUnreclaim: 876316 kB' 'KernelStack: 27568 kB' 'PageTables: 9336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11547144 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237976 kB' 'VmallocChunk: 0 kB' 'Percpu: 130752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4441460 kB' 'DirectMap2M: 50812928 kB' 'DirectMap1G: 80740352 kB' 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.475 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.475 19:56:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.476 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.477 19:56:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.477 
19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:30.477 nr_hugepages=1024 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:30.477 resv_hugepages=0 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:30.477 surplus_hugepages=0 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:30.477 anon_hugepages=0 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- 
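The long run of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue" pairs above is setup/common.sh's get_meminfo helper walking /proc/meminfo one field at a time: each line is split on IFS=': ', the first token is compared against the field being looked up, and the matching value is echoed back (0 for HugePages_Rsvd here, which hugepages.sh stores as resv=0 before printing the nr_hugepages/resv_hugepages/surplus_hugepages/anon_hugepages summary). A minimal stand-alone sketch of that scan, reconstructed from the trace rather than copied from the SPDK sources (the function name and return convention are illustrative):

  # Sketch: fetch one field from /proc/meminfo (illustrative helper, not SPDK's exact code)
  get_meminfo_field() {
      local want=$1 key val _
      while IFS=': ' read -r key val _; do
          if [[ $key == "$want" ]]; then
              echo "$val"            # e.g. 0 for HugePages_Rsvd in the run above
              return 0
          fi
      done < /proc/meminfo
      return 1                       # field not present
  }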
setup/common.sh@31 -- # IFS=': ' 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338876 kB' 'MemFree: 105011812 kB' 'MemAvailable: 109495024 kB' 'Buffers: 4124 kB' 'Cached: 14604172 kB' 'SwapCached: 0 kB' 'Active: 10750096 kB' 'Inactive: 4481552 kB' 'Active(anon): 10110232 kB' 'Inactive(anon): 0 kB' 'Active(file): 639864 kB' 'Inactive(file): 4481552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 626540 kB' 'Mapped: 239712 kB' 'Shmem: 9486880 kB' 'KReclaimable: 372256 kB' 'Slab: 1248572 kB' 'SReclaimable: 372256 kB' 'SUnreclaim: 876316 kB' 'KernelStack: 27488 kB' 'PageTables: 9392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509464 kB' 'Committed_AS: 11547172 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238008 kB' 'VmallocChunk: 0 kB' 'Percpu: 130752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4441460 kB' 'DirectMap2M: 50812928 kB' 'DirectMap1G: 80740352 kB' 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.477 19:56:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.477 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
65659008 kB' 'MemFree: 57989216 kB' 'MemUsed: 7669792 kB' 'SwapCached: 0 kB' 'Active: 3889268 kB' 'Inactive: 156628 kB' 'Active(anon): 3724496 kB' 'Inactive(anon): 0 kB' 'Active(file): 164772 kB' 'Inactive(file): 156628 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3884996 kB' 'Mapped: 47068 kB' 'AnonPages: 164044 kB' 'Shmem: 3563596 kB' 'KernelStack: 12888 kB' 'PageTables: 3364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 167952 kB' 'Slab: 575316 kB' 'SReclaimable: 167952 kB' 'SUnreclaim: 407364 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.478 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.479 
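Once the system-wide totals check out, the same scan is repeated against node 0's view of memory: get_meminfo is re-invoked with node=0, so mem_f switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo and the leading "Node <n> " prefix is stripped from every line (the mem=("${mem[@]#Node +([0-9]) }") step). The snapshot printed above is internally consistent: MemTotal 65659008 kB minus MemFree 57989216 kB equals the reported MemUsed 7669792 kB, and all 1024 hugepages on the node are still free. A hedged sketch of such a per-node read, assuming only the standard sysfs layout (the helper name is illustrative):

  # Sketch: read one counter from a NUMA node's meminfo (illustrative helper)
  node_meminfo_field() {
      local node=$1 want=$2 key val _
      local f=/sys/devices/system/node/node${node}/meminfo
      [[ -e $f ]] || return 1
      # Lines look like "Node 0 HugePages_Surp:  0"; the first two fields skip that prefix.
      while IFS=': ' read -r _ _ key val _; do
          [[ $key == "$want" ]] && { echo "$val"; return 0; }
      done < "$f"
      return 1
  }
  node_meminfo_field 0 HugePages_Surp    # prints 0 on the node traced above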
19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.479 19:56:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:30.479 node0=1024 expecting 1024 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:30.479 00:04:30.479 real 0m8.459s 00:04:30.479 user 0m3.194s 00:04:30.479 sys 0m5.349s 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:30.479 19:56:22 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:30.479 ************************************ 00:04:30.479 END TEST no_shrink_alloc 00:04:30.479 ************************************ 00:04:30.479 19:56:22 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:30.479 19:56:22 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:30.479 19:56:22 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 
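The pass condition for this allocation test is the accounting identity traced at hugepages.sh@107 and @110: the HugePages_Total value read back from the kernel has to equal the requested nr_hugepages plus surplus plus reserved pages, and the per-node split has to match the expectation echoed above ("node0=1024 expecting 1024"). The numbers also square with the earlier snapshot, since 1024 pages of 2048 kB account for exactly the reported Hugetlb of 2097152 kB. A small sketch of the same checks with this run's values substituted in:

  # Sketch: the consistency checks from this run, with its numbers filled in
  nr_hugepages=1024 surp=0 resv=0
  (( 1024 == nr_hugepages + surp + resv )) && echo "hugepage totals consistent"
  echo "$(( 1024 * 2048 )) kB"    # 2097152 kB, matching the Hugetlb line in the snapshot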
00:04:30.479 19:56:22 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:30.479 19:56:22 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:30.479 19:56:22 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:30.479 19:56:22 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:30.479 19:56:22 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:30.479 19:56:22 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:30.479 19:56:22 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:30.479 19:56:22 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:30.479 19:56:22 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:30.479 19:56:22 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:30.479 19:56:22 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:30.479 00:04:30.479 real 0m30.564s 00:04:30.479 user 0m11.607s 00:04:30.479 sys 0m19.250s 00:04:30.479 19:56:22 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:30.479 19:56:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:30.479 ************************************ 00:04:30.479 END TEST hugepages 00:04:30.479 ************************************ 00:04:30.479 19:56:22 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:30.479 19:56:22 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:30.479 19:56:22 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:30.479 19:56:22 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:30.479 ************************************ 00:04:30.479 START TEST driver 00:04:30.479 ************************************ 00:04:30.479 19:56:22 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:30.479 * Looking for test storage... 
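The last thing the hugepages suite does above, before handing off to the driver tests, is clear_hp: it walks every node's hugepage pools under /sys/devices/system/node/node<N>/hugepages/hugepages-*/ and echoes 0 for each of them (xtrace does not show redirections, but the natural target is each pool's nr_hugepages file), then exports CLEAR_HUGE=yes so later stages know the pools were released. A sketch of that teardown under those assumptions:

  # Sketch: release explicitly allocated hugepages on every NUMA node
  # (assumed teardown; the redirect target is inferred, since xtrace omits it)
  for node in /sys/devices/system/node/node[0-9]*; do
      for hp in "$node"/hugepages/hugepages-*; do
          echo 0 > "$hp/nr_hugepages"    # hand the pages back to the kernel
      done
  done
  export CLEAR_HUGE=yes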
00:04:30.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:30.479 19:56:22 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:30.479 19:56:22 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:30.479 19:56:22 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:35.770 19:56:28 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:35.770 19:56:28 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:35.770 19:56:28 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:35.770 19:56:28 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:35.770 ************************************ 00:04:35.770 START TEST guess_driver 00:04:35.770 ************************************ 00:04:35.770 19:56:28 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:04:35.770 19:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:35.770 19:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:35.770 19:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:35.770 19:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:35.771 19:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:35.771 19:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:35.771 19:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:35.771 19:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:35.771 19:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:35.771 19:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 370 > 0 )) 00:04:35.771 19:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:35.771 19:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:35.771 19:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:35.771 19:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:35.771 19:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:35.771 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:35.771 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:35.771 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:35.771 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:35.771 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:35.771 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:35.771 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:35.771 19:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:35.771 19:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:35.771 19:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:35.771 19:56:28 setup.sh.driver.guess_driver 
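guess_driver settles on vfio-pci by checking the platform in the trace above: /sys/module/vfio/parameters/enable_unsafe_noiommu_mode exists (and reads N), the host exposes a non-empty set of IOMMU groups (370 of them), and modprobe --show-depends vfio_pci resolves to real .ko.xz modules, so VFIO is considered usable and the driver name is echoed back to the caller. A condensed sketch of that decision reconstructed from the trace (the helper name and the uio_pci_generic fallback are assumptions; the "No valid driver found" string is the one the script itself tests for):

  # Sketch: prefer vfio-pci when the IOMMU looks usable (reconstructed from the trace)
  pick_driver() {
      local groups=(/sys/kernel/iommu_groups/*)
      if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci | grep -q '\.ko'; then
          echo vfio-pci
      elif modprobe --show-depends uio_pci_generic &> /dev/null; then
          echo uio_pci_generic      # assumed fallback; not exercised in this run
      else
          echo 'No valid driver found'
      fi
  }
  driver=$(pick_driver)             # -> vfio-pci on this machine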
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:35.771 19:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:35.771 Looking for driver=vfio-pci 00:04:35.771 19:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:35.771 19:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:35.771 19:56:28 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.771 19:56:28 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:40.035 19:56:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.035 19:56:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.035 19:56:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.035 19:56:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.035 19:56:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.035 19:56:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.035 19:56:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.035 19:56:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.035 19:56:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.035 19:56:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.035 19:56:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.035 19:56:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.035 19:56:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.035 19:56:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.035 19:56:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.035 19:56:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.035 19:56:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.035 19:56:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.035 19:56:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.035 19:56:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.035 19:56:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.035 19:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.035 19:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.035 19:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.035 19:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.035 19:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.035 19:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.035 19:56:32 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.035 19:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.035 19:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.035 19:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.035 19:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.035 19:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.035 19:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.035 19:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.035 19:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.035 19:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.035 19:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.035 19:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.035 19:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.035 19:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.035 19:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.035 19:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.035 19:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.035 19:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.035 19:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.035 19:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.035 19:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.035 19:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.035 19:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.035 19:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.035 19:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:40.035 19:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:40.035 19:56:32 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:40.035 19:56:32 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:45.332 00:04:45.332 real 0m9.613s 00:04:45.332 user 0m2.907s 00:04:45.332 sys 0m5.777s 00:04:45.332 19:56:37 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:45.332 19:56:37 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:45.332 ************************************ 00:04:45.332 END TEST guess_driver 00:04:45.332 ************************************ 00:04:45.594 00:04:45.594 real 0m15.018s 00:04:45.594 user 0m4.446s 00:04:45.594 sys 0m8.724s 00:04:45.594 19:56:37 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:45.594 
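The guess_driver trace above boils down to one decision: vfio-pci is selected because /sys/kernel/iommu_groups is populated (370 groups here) and modprobe can resolve the vfio_pci module chain. A minimal bash sketch of that logic, written from the trace rather than copied from setup/driver.sh; the uio_pci_generic fallback is an assumption this run never exercises.

  pick_driver() {
      shopt -s nullglob
      local groups=(/sys/kernel/iommu_groups/*)
      local unsafe=N
      [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
          unsafe=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
      # vfio-pci is usable when IOMMU groups exist (or unsafe no-IOMMU mode is on)
      # and the vfio_pci module dependency chain resolves.
      if { (( ${#groups[@]} > 0 )) || [[ $unsafe == [Yy] ]]; } &&
          modprobe --show-depends vfio_pci &>/dev/null; then
          echo vfio-pci
      else
          echo uio_pci_generic   # assumed fallback, not taken in this log
      fi
  }
  pick_driver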
19:56:37 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:45.594 ************************************ 00:04:45.594 END TEST driver 00:04:45.594 ************************************ 00:04:45.594 19:56:37 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:45.594 19:56:37 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:45.594 19:56:37 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:45.594 19:56:37 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:45.594 ************************************ 00:04:45.594 START TEST devices 00:04:45.594 ************************************ 00:04:45.594 19:56:37 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:45.594 * Looking for test storage... 00:04:45.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:45.594 19:56:38 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:45.594 19:56:38 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:45.594 19:56:38 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:45.594 19:56:38 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:50.886 19:56:42 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:50.886 19:56:42 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:04:50.886 19:56:42 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:04:50.886 19:56:42 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:04:50.886 19:56:42 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:04:50.886 19:56:42 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:04:50.886 19:56:42 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:04:50.886 19:56:42 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:50.886 19:56:42 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:04:50.886 19:56:42 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:50.886 19:56:42 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:50.886 19:56:42 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:50.886 19:56:42 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:50.886 19:56:42 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:50.886 19:56:42 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:50.886 19:56:42 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:50.886 19:56:42 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:50.886 19:56:42 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:50.886 19:56:42 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:50.886 19:56:42 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:50.886 19:56:42 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:50.886 19:56:42 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:50.886 No valid GPT data, 
bailing 00:04:50.886 19:56:42 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:50.886 19:56:42 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:50.886 19:56:42 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:50.886 19:56:42 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:50.886 19:56:42 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:50.886 19:56:42 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:50.886 19:56:42 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:04:50.886 19:56:42 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:50.886 19:56:42 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:50.886 19:56:42 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:50.886 19:56:42 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:50.886 19:56:42 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:50.886 19:56:42 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:50.886 19:56:42 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:50.886 19:56:42 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:50.886 19:56:42 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:50.886 ************************************ 00:04:50.886 START TEST nvme_mount 00:04:50.886 ************************************ 00:04:50.886 19:56:42 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:04:50.886 19:56:42 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:50.886 19:56:42 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:50.886 19:56:42 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.886 19:56:42 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:50.886 19:56:42 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:50.887 19:56:42 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:50.887 19:56:42 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:50.887 19:56:42 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:50.887 19:56:42 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:50.887 19:56:42 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:50.887 19:56:42 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:50.887 19:56:42 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:50.887 19:56:42 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:50.887 19:56:42 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:50.887 19:56:42 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:50.887 19:56:42 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:50.887 19:56:42 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:50.887 19:56:42 
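Before the nvme_mount test starts, the devices suite filters /sys/block for usable disks: zoned namespaces are skipped and the device must be at least min_disk_size=3221225472 bytes (3 GiB); nvme0n1 reports 1920383410176 bytes and passes. A rough equivalent of that filter, using the same sysfs files the trace reads:

  min_disk_size=$((3 * 1024 * 1024 * 1024))       # 3221225472, as in the trace
  shopt -s nullglob
  for dev in /sys/block/nvme*n*; do
      name=${dev##*/}
      zoned=none
      [[ -e $dev/queue/zoned ]] && zoned=$(cat "$dev/queue/zoned")
      [[ $zoned != none ]] && continue            # skip zoned namespaces
      size=$(( $(cat "$dev/size") * 512 ))        # sysfs size is in 512-byte sectors
      (( size >= min_disk_size )) && echo "$name: $size bytes, large enough"
  done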
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:50.887 19:56:42 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:51.456 Creating new GPT entries in memory. 00:04:51.456 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:51.456 other utilities. 00:04:51.456 19:56:43 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:51.456 19:56:43 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:51.456 19:56:43 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:51.456 19:56:43 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:51.456 19:56:43 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:52.398 Creating new GPT entries in memory. 00:04:52.398 The operation has completed successfully. 00:04:52.398 19:56:44 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:52.398 19:56:44 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:52.398 19:56:44 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3975004 00:04:52.398 19:56:44 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.398 19:56:44 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:52.398 19:56:44 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.659 19:56:44 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:52.659 19:56:44 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:52.659 19:56:44 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.659 19:56:44 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:52.659 19:56:44 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:52.659 19:56:44 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:52.659 19:56:44 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.659 19:56:44 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:52.659 19:56:44 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:52.659 19:56:44 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:52.659 19:56:44 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:52.659 19:56:44 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
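The partition_drive and mkfs helpers traced above amount to: wipe the label, create a roughly 1 GiB partition, wait for its device node, format it ext4 and mount it for the test. A condensed, destructive sketch of that flow; the disk and mount point below are placeholders (the real test mounts under spdk/test/setup/nvme_mount), while the sector range is the one visible in the log:

  disk=/dev/nvme0n1                       # placeholder: use a disposable disk
  mnt=/tmp/nvme_mount                     # placeholder mount point
  sgdisk "$disk" --zap-all                # destroy any existing GPT/MBR
  sgdisk "$disk" --new=1:2048:2099199     # ~1 GiB partition, same range as the log
  udevadm settle                          # wait for /dev/nvme0n1p1 to show up
  mkfs.ext4 -qF "${disk}p1"
  mkdir -p "$mnt"
  mount "${disk}p1" "$mnt"
  touch "$mnt/test_nvme"                  # the dummy file the verify step checks for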
00:04:52.659 19:56:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.659 19:56:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:52.659 19:56:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:52.659 19:56:44 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.659 19:56:44 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:56.865 19:56:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.865 19:56:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.865 19:56:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.865 19:56:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.865 19:56:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.865 19:56:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.865 19:56:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.865 19:56:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.865 19:56:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.865 19:56:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.865 19:56:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.865 19:56:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.865 19:56:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.865 19:56:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.865 19:56:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.865 19:56:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.865 19:56:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.865 19:56:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:56.865 19:56:48 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:56.865 19:56:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.865 19:56:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.865 19:56:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.865 19:56:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.865 19:56:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.865 19:56:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.865 19:56:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:56.865 19:56:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.865 19:56:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.865 19:56:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.865 19:56:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.865 19:56:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.865 19:56:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.865 19:56:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.865 19:56:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.865 19:56:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:56.865 19:56:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.865 19:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:56.865 19:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:56.865 19:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.865 19:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:56.865 19:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:56.865 19:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:56.865 19:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.865 19:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.865 19:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:56.865 19:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:56.865 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:56.865 19:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:56.865 19:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:57.126 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:57.126 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:57.126 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:57.126 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:57.127 19:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:57.127 19:56:49 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:57.127 19:56:49 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:57.127 19:56:49 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:57.127 19:56:49 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:57.127 19:56:49 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:57.127 19:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:57.127 19:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:57.127 19:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:57.127 19:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:57.127 19:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:57.127 19:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:57.127 19:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:57.127 19:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:57.127 19:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:57.127 19:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.127 19:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:57.127 19:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:57.127 19:56:49 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.127 19:56:49 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:01.339 19:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.339 19:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.339 19:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.339 19:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.339 19:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.339 19:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.339 19:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.339 19:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.339 19:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.339 19:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.339 19:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.339 19:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.339 19:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.339 19:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.339 19:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.339 19:56:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.339 19:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.340 19:56:53 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:04.645 19:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:04.645 19:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.645 19:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:04.645 19:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.645 19:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:04.645 19:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.645 19:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:04.645 19:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.645 19:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:04.645 19:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.645 19:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:04.645 19:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.645 19:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:04.645 19:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.645 19:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:04.646 19:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.646 19:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 
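The long run of [[ 0000:xx:yy.z == ... ]] checks above is the verify helper scanning status output line by line: read the PCI address and the trailing status column, ignore everything except the allowed device (0000:65:00.0), and set found=1 only when that status advertises the expected active mount. A standalone sketch of the pattern; the field layout and the status_output variable are assumptions for illustration, not the exact format of the setup.sh output:

  target=0000:65:00.0
  expected=nvme0n1:nvme0n1p1
  found=0
  while read -r pci _ _ status; do
      [[ $pci == "$target" ]] || continue
      [[ $status == *"Active devices: "*"$expected"* ]] && found=1
  done <<< "$status_output"               # assumed: captured status text
  (( found == 1 )) && echo "expected mount is active on $target"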
00:05:04.646 19:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:04.646 19:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:04.646 19:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.646 19:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:04.646 19:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.646 19:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:04.646 19:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.646 19:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:04.646 19:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.646 19:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:04.646 19:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.646 19:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:04.646 19:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.646 19:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:04.646 19:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.646 19:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:04.646 19:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.646 19:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:04.646 19:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.220 19:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:05.220 19:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:05.220 19:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:05.220 19:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:05.220 19:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:05.220 19:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:05.220 19:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:05.220 19:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:05.220 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:05.220 00:05:05.220 real 0m14.735s 00:05:05.220 user 0m4.478s 00:05:05.220 sys 0m8.082s 00:05:05.220 19:56:57 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:05.221 19:56:57 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:05.221 ************************************ 00:05:05.221 END TEST nvme_mount 00:05:05.221 ************************************ 00:05:05.221 
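The wipefs output just above is the teardown: once the mount is gone, the ext4 signature (magic bytes 53 ef) is cleared from the formatted device. A hedged sketch of that cleanup, following the cleanup_nvme steps the trace shows (unmount if mounted, then wipe whichever of the partition and the whole disk still exist); the paths are the same placeholders used above:

  mnt=/tmp/nvme_mount
  disk=/dev/nvme0n1
  mountpoint -q "$mnt" && umount "$mnt"
  [[ -b ${disk}p1 ]] && wipefs --all "${disk}p1"   # wipe any remaining signatures on the partition
  [[ -b $disk ]] && wipefs --all "$disk"           # and on the whole disk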
19:56:57 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:05.221 19:56:57 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:05.221 19:56:57 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:05.221 19:56:57 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:05.221 ************************************ 00:05:05.221 START TEST dm_mount 00:05:05.221 ************************************ 00:05:05.221 19:56:57 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:05:05.221 19:56:57 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:05.221 19:56:57 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:05.221 19:56:57 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:05.221 19:56:57 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:05.221 19:56:57 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:05.221 19:56:57 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:05.221 19:56:57 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:05.221 19:56:57 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:05.221 19:56:57 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:05.221 19:56:57 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:05.221 19:56:57 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:05.221 19:56:57 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:05.221 19:56:57 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:05.221 19:56:57 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:05.221 19:56:57 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:05.221 19:56:57 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:05.221 19:56:57 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:05.221 19:56:57 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:05.221 19:56:57 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:05.221 19:56:57 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:05.221 19:56:57 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:06.164 Creating new GPT entries in memory. 00:05:06.164 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:06.164 other utilities. 00:05:06.164 19:56:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:06.164 19:56:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:06.164 19:56:58 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:06.164 19:56:58 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:06.164 19:56:58 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:07.550 Creating new GPT entries in memory. 00:05:07.550 The operation has completed successfully. 
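Note the flock wrapped around sgdisk and the separate sync_dev_uevents.sh call: partitioning is serialized on the disk, and the test then waits for udev to publish the new partition nodes before touching them. The helper's internals are not shown in this log, so the wait loop below is only a generic stand-in with the same intent; the sector ranges are the ones the dm_mount trace uses:

  disk=/dev/nvme0n1
  flock "$disk" sgdisk "$disk" --new=1:2048:2099199
  flock "$disk" sgdisk "$disk" --new=2:2099200:4196351
  udevadm settle --timeout=10
  for part in "${disk}p1" "${disk}p2"; do
      for _ in $(seq 1 50); do
          [[ -b $part ]] && break
          sleep 0.1
      done
      [[ -b $part ]] || { echo "timed out waiting for $part" >&2; exit 1; }
  done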
00:05:07.550 19:56:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:07.550 19:56:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:07.550 19:56:59 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:07.550 19:56:59 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:07.550 19:56:59 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:08.495 The operation has completed successfully. 00:05:08.495 19:57:00 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:08.495 19:57:00 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:08.495 19:57:00 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3980590 00:05:08.495 19:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:08.495 19:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:08.495 19:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:08.495 19:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:08.495 19:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:08.495 19:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:08.495 19:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:08.495 19:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:08.495 19:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:08.495 19:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-1 00:05:08.495 19:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-1 00:05:08.495 19:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-1 ]] 00:05:08.495 19:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-1 ]] 00:05:08.495 19:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:08.495 19:57:00 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:08.495 19:57:00 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:08.495 19:57:00 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:08.495 19:57:00 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:08.495 19:57:00 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:08.495 19:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:08.495 19:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:08.495 19:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:08.495 19:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:08.495 19:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:08.495 19:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:08.495 19:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:08.495 19:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:08.495 19:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:08.495 19:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.495 19:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:08.495 19:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:08.495 19:57:00 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:08.495 19:57:00 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 '' '' 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:12.708 
19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:12.708 19:57:04 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:16.923 19:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:16.923 19:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.923 19:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:16.923 19:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.923 19:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:16.923 19:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.923 19:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:16.923 19:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.923 19:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:16.923 19:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.923 19:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:16.923 19:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.923 19:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:16.923 19:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.923 19:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:16.923 19:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.923 19:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:16.923 19:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\1\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\1* ]] 00:05:16.923 19:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:16.923 19:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.923 19:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:16.923 19:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.923 19:57:08 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:16.923 19:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.923 19:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:16.923 19:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.923 19:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:16.923 19:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.923 19:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:16.923 19:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.923 19:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:16.923 19:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.923 19:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:16.923 19:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.923 19:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:16.923 19:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.923 19:57:09 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:16.923 19:57:09 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:16.923 19:57:09 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:16.923 19:57:09 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:16.923 19:57:09 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:16.923 19:57:09 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:16.923 19:57:09 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:16.923 19:57:09 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:16.923 19:57:09 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:16.923 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:16.923 19:57:09 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:16.923 19:57:09 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:16.923 00:05:16.923 real 0m11.505s 00:05:16.923 user 0m2.996s 00:05:16.923 sys 0m5.525s 00:05:16.923 19:57:09 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:16.923 19:57:09 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:16.923 ************************************ 00:05:16.924 END TEST dm_mount 00:05:16.924 ************************************ 00:05:16.924 19:57:09 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:16.924 19:57:09 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:16.924 19:57:09 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:16.924 19:57:09 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 
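dm_mount wraps the two partitions in a device-mapper target called nvme_dm_test (resolving to dm-1 above) and later removes it with dmsetup remove --force. The trace never shows the table passed to dmsetup create, so the linear table below is an assumption about how such a target could be built, not the test's own table:

  p1=/dev/nvme0n1p1
  p2=/dev/nvme0n1p2
  s1=$(blockdev --getsz "$p1")            # partition sizes in 512-byte sectors
  s2=$(blockdev --getsz "$p2")
  # Concatenate the two partitions into one linear dm device.
  printf '0 %s linear %s 0\n%s %s linear %s 0\n' "$s1" "$p1" "$s1" "$s2" "$p2" |
      dmsetup create nvme_dm_test
  readlink -f /dev/mapper/nvme_dm_test    # resolves to /dev/dm-N (dm-1 in this run)
  # ...format, mount, test... then tear it down:
  dmsetup remove --force nvme_dm_test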
00:05:16.924 19:57:09 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:16.924 19:57:09 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:16.924 19:57:09 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:17.185 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:17.185 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:05:17.185 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:17.185 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:17.185 19:57:09 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:17.185 19:57:09 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:17.185 19:57:09 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:17.185 19:57:09 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:17.185 19:57:09 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:17.185 19:57:09 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:17.185 19:57:09 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:17.185 00:05:17.185 real 0m31.541s 00:05:17.185 user 0m9.340s 00:05:17.185 sys 0m16.917s 00:05:17.185 19:57:09 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:17.185 19:57:09 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:17.185 ************************************ 00:05:17.185 END TEST devices 00:05:17.185 ************************************ 00:05:17.185 00:05:17.185 real 1m46.891s 00:05:17.185 user 0m35.176s 00:05:17.185 sys 1m2.593s 00:05:17.185 19:57:09 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:17.185 19:57:09 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:17.185 ************************************ 00:05:17.185 END TEST setup.sh 00:05:17.185 ************************************ 00:05:17.186 19:57:09 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:21.396 Hugepages 00:05:21.396 node hugesize free / total 00:05:21.396 node0 1048576kB 0 / 0 00:05:21.396 node0 2048kB 2048 / 2048 00:05:21.396 node1 1048576kB 0 / 0 00:05:21.396 node1 2048kB 0 / 0 00:05:21.396 00:05:21.396 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:21.396 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:05:21.396 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:05:21.396 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:05:21.396 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:05:21.396 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:05:21.396 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:05:21.396 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:05:21.396 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:05:21.396 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:05:21.396 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:05:21.396 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:05:21.397 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:05:21.397 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:05:21.397 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:05:21.397 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:05:21.397 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:05:21.397 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:05:21.397 19:57:13 -- spdk/autotest.sh@130 -- # uname -s 
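The `setup.sh status` dump above reports per-NUMA-node hugepage counts and the PCI devices the scripts manage. For readers who want to reproduce just the hugepage portion of that report outside the harness, a minimal sketch reading the standard Linux sysfs counters follows; the node layout and the 2048kB/1048576kB page sizes come from the output above, while the script itself is an illustrative assumption and not the SPDK setup.sh code.

#!/usr/bin/env bash
# Sketch: print free/total hugepages per NUMA node, similar to the
# "Hugepages" block emitted by scripts/setup.sh status above.
# Relies only on the generic Linux sysfs layout.
for node in /sys/devices/system/node/node*; do
  for hp in "$node"/hugepages/hugepages-*; do
    [[ -d $hp ]] || continue
    size=${hp##*hugepages-}               # e.g. 2048kB or 1048576kB
    total=$(cat "$hp/nr_hugepages")
    free=$(cat "$hp/free_hugepages")
    echo "$(basename "$node") $size $free / $total"
  done
done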
00:05:21.397 19:57:13 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:21.397 19:57:13 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:21.397 19:57:13 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:25.699 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:25.699 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:25.699 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:25.699 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:25.699 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:25.699 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:25.699 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:25.699 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:25.699 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:25.699 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:25.699 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:25.699 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:25.699 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:25.699 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:25.699 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:25.699 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:27.153 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:27.412 19:57:19 -- common/autotest_common.sh@1528 -- # sleep 1 00:05:28.799 19:57:20 -- common/autotest_common.sh@1529 -- # bdfs=() 00:05:28.799 19:57:20 -- common/autotest_common.sh@1529 -- # local bdfs 00:05:28.799 19:57:20 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:05:28.799 19:57:20 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:05:28.799 19:57:20 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:28.799 19:57:20 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:28.799 19:57:20 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:28.799 19:57:20 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:28.799 19:57:20 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:28.799 19:57:20 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:28.799 19:57:20 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:65:00.0 00:05:28.799 19:57:20 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:33.010 Waiting for block devices as requested 00:05:33.010 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:33.010 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:33.010 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:33.010 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:33.010 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:33.010 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:33.010 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:33.010 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:33.010 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:05:33.272 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:33.272 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:33.534 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:33.534 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:33.534 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:33.795 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:33.795 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:33.795 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:34.057 19:57:26 -- common/autotest_common.sh@1534 -- # 
for bdf in "${bdfs[@]}" 00:05:34.057 19:57:26 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:34.057 19:57:26 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:05:34.057 19:57:26 -- common/autotest_common.sh@1498 -- # grep 0000:65:00.0/nvme/nvme 00:05:34.057 19:57:26 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:34.057 19:57:26 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:34.057 19:57:26 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:34.057 19:57:26 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:05:34.058 19:57:26 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:05:34.058 19:57:26 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:05:34.058 19:57:26 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:05:34.058 19:57:26 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:34.058 19:57:26 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:34.058 19:57:26 -- common/autotest_common.sh@1541 -- # oacs=' 0x5f' 00:05:34.058 19:57:26 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:34.058 19:57:26 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:34.058 19:57:26 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:05:34.058 19:57:26 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:34.058 19:57:26 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:34.058 19:57:26 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:34.058 19:57:26 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:34.058 19:57:26 -- common/autotest_common.sh@1553 -- # continue 00:05:34.058 19:57:26 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:34.058 19:57:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:34.058 19:57:26 -- common/autotest_common.sh@10 -- # set +x 00:05:34.319 19:57:26 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:34.319 19:57:26 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:34.319 19:57:26 -- common/autotest_common.sh@10 -- # set +x 00:05:34.319 19:57:26 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:38.533 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:38.533 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:38.533 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:38.533 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:38.533 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:38.533 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:38.533 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:38.533 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:38.533 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:38.533 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:38.533 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:38.533 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:38.533 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:38.533 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:38.533 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:38.533 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:38.533 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:38.533 19:57:31 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:38.533 19:57:31 -- common/autotest_common.sh@726 -- # xtrace_disable 
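The trace above walks nvme_namespace_revert: it resolves the controller node for BDF 0000:65:00.0, reads the OACS field from `nvme id-ctrl`, masks out bit 3 (namespace management, the `oacs_ns_manage=8` value in the log), and then checks `unvmcap`. A stand-alone sketch of that capability check, using the same `nvme` CLI invocations seen in the trace, could look like the following; the `/dev/nvme0` controller path is assumed for illustration.

#!/usr/bin/env bash
# Sketch: check whether a controller advertises Namespace Management,
# mirroring the oacs/unvmcap probe traced above. /dev/nvme0 is assumed.
ctrl=/dev/nvme0
oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)        # e.g. " 0x5f"
ns_manage=$(( oacs & 0x8 ))                                   # bit 3 = NS management
if (( ns_manage != 0 )); then
  unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)
  echo "namespace management supported, unallocated capacity:$unvmcap"
else
  echo "controller does not support namespace management"
fi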
00:05:38.533 19:57:31 -- common/autotest_common.sh@10 -- # set +x 00:05:38.794 19:57:31 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:38.794 19:57:31 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:05:38.794 19:57:31 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:05:38.794 19:57:31 -- common/autotest_common.sh@1573 -- # bdfs=() 00:05:38.794 19:57:31 -- common/autotest_common.sh@1573 -- # local bdfs 00:05:38.794 19:57:31 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:05:38.794 19:57:31 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:38.794 19:57:31 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:38.794 19:57:31 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:38.794 19:57:31 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:38.794 19:57:31 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:38.794 19:57:31 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:38.794 19:57:31 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:65:00.0 00:05:38.794 19:57:31 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:38.794 19:57:31 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:38.794 19:57:31 -- common/autotest_common.sh@1576 -- # device=0xa80a 00:05:38.794 19:57:31 -- common/autotest_common.sh@1577 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:38.794 19:57:31 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:05:38.794 19:57:31 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:05:38.794 19:57:31 -- common/autotest_common.sh@1589 -- # return 0 00:05:38.794 19:57:31 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:38.794 19:57:31 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:38.794 19:57:31 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:38.794 19:57:31 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:38.794 19:57:31 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:38.794 19:57:31 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:38.794 19:57:31 -- common/autotest_common.sh@10 -- # set +x 00:05:38.794 19:57:31 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:38.794 19:57:31 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:38.794 19:57:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:38.794 19:57:31 -- common/autotest_common.sh@10 -- # set +x 00:05:38.794 ************************************ 00:05:38.794 START TEST env 00:05:38.794 ************************************ 00:05:38.794 19:57:31 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:38.794 * Looking for test storage... 
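opal_revert_cleanup above filters the detected NVMe controllers by PCI device ID: it reads `/sys/bus/pci/devices/<bdf>/device` and compares it against 0x0a54, so the 0xa80a controller in this rig is skipped and the function returns without doing any OPAL revert. A small sketch of that filter, reusing the sysfs read seen in the trace, is below; the BDF list is hard-coded here for illustration, whereas the harness derives it from gen_nvme.sh.

#!/usr/bin/env bash
# Sketch: select NVMe controllers whose PCI device ID matches a target,
# as opal_revert_cleanup does with 0x0a54 in the trace above.
target=0x0a54
bdfs=(0000:65:00.0)          # assumed; the harness builds this list dynamically
for bdf in "${bdfs[@]}"; do
  device=$(cat "/sys/bus/pci/devices/$bdf/device")   # e.g. 0xa80a
  if [[ $device == "$target" ]]; then
    echo "$bdf matches $target"
  fi
done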
00:05:38.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:38.794 19:57:31 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:38.794 19:57:31 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:38.794 19:57:31 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:38.794 19:57:31 env -- common/autotest_common.sh@10 -- # set +x 00:05:39.056 ************************************ 00:05:39.056 START TEST env_memory 00:05:39.056 ************************************ 00:05:39.056 19:57:31 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:39.056 00:05:39.056 00:05:39.056 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.056 http://cunit.sourceforge.net/ 00:05:39.056 00:05:39.056 00:05:39.056 Suite: memory 00:05:39.056 Test: alloc and free memory map ...[2024-05-15 19:57:31.390681] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:39.056 passed 00:05:39.056 Test: mem map translation ...[2024-05-15 19:57:31.416367] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:39.056 [2024-05-15 19:57:31.416396] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:39.056 [2024-05-15 19:57:31.416445] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:39.056 [2024-05-15 19:57:31.416452] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:39.056 passed 00:05:39.056 Test: mem map registration ...[2024-05-15 19:57:31.471654] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:39.056 [2024-05-15 19:57:31.471675] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:39.056 passed 00:05:39.056 Test: mem map adjacent registrations ...passed 00:05:39.056 00:05:39.056 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.056 suites 1 1 n/a 0 0 00:05:39.056 tests 4 4 4 0 0 00:05:39.056 asserts 152 152 152 0 n/a 00:05:39.056 00:05:39.056 Elapsed time = 0.195 seconds 00:05:39.056 00:05:39.056 real 0m0.208s 00:05:39.056 user 0m0.198s 00:05:39.056 sys 0m0.009s 00:05:39.056 19:57:31 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:39.056 19:57:31 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:39.056 ************************************ 00:05:39.056 END TEST env_memory 00:05:39.056 ************************************ 00:05:39.319 19:57:31 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:39.319 19:57:31 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:39.319 19:57:31 env -- common/autotest_common.sh@1103 -- # xtrace_disable 
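Each `run_test` invocation above wraps a test command with START/END banners, timing via bash's built-in `time` (the real/user/sys triplets in the log), and status propagation. The actual helper lives in common/autotest_common.sh; the simplified sketch below only reproduces the banner/timing structure so the log is easier to follow, and is not the SPDK implementation.

#!/usr/bin/env bash
# Simplified run_test-style wrapper: banners, timing, status propagation.
# Illustrative reimplementation, not the SPDK helper itself.
run_test() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"
  local rc=$?
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}

run_test env_memory ./test/env/memory/memory_ut   # relative path assumed; the log uses the absolute repo path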
00:05:39.319 19:57:31 env -- common/autotest_common.sh@10 -- # set +x 00:05:39.319 ************************************ 00:05:39.319 START TEST env_vtophys 00:05:39.319 ************************************ 00:05:39.319 19:57:31 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:39.319 EAL: lib.eal log level changed from notice to debug 00:05:39.319 EAL: Detected lcore 0 as core 0 on socket 0 00:05:39.319 EAL: Detected lcore 1 as core 1 on socket 0 00:05:39.319 EAL: Detected lcore 2 as core 2 on socket 0 00:05:39.319 EAL: Detected lcore 3 as core 3 on socket 0 00:05:39.319 EAL: Detected lcore 4 as core 4 on socket 0 00:05:39.319 EAL: Detected lcore 5 as core 5 on socket 0 00:05:39.319 EAL: Detected lcore 6 as core 6 on socket 0 00:05:39.319 EAL: Detected lcore 7 as core 7 on socket 0 00:05:39.319 EAL: Detected lcore 8 as core 8 on socket 0 00:05:39.319 EAL: Detected lcore 9 as core 9 on socket 0 00:05:39.319 EAL: Detected lcore 10 as core 10 on socket 0 00:05:39.319 EAL: Detected lcore 11 as core 11 on socket 0 00:05:39.319 EAL: Detected lcore 12 as core 12 on socket 0 00:05:39.319 EAL: Detected lcore 13 as core 13 on socket 0 00:05:39.319 EAL: Detected lcore 14 as core 14 on socket 0 00:05:39.319 EAL: Detected lcore 15 as core 15 on socket 0 00:05:39.319 EAL: Detected lcore 16 as core 16 on socket 0 00:05:39.319 EAL: Detected lcore 17 as core 17 on socket 0 00:05:39.319 EAL: Detected lcore 18 as core 18 on socket 0 00:05:39.319 EAL: Detected lcore 19 as core 19 on socket 0 00:05:39.319 EAL: Detected lcore 20 as core 20 on socket 0 00:05:39.319 EAL: Detected lcore 21 as core 21 on socket 0 00:05:39.319 EAL: Detected lcore 22 as core 22 on socket 0 00:05:39.319 EAL: Detected lcore 23 as core 23 on socket 0 00:05:39.319 EAL: Detected lcore 24 as core 24 on socket 0 00:05:39.319 EAL: Detected lcore 25 as core 25 on socket 0 00:05:39.319 EAL: Detected lcore 26 as core 26 on socket 0 00:05:39.319 EAL: Detected lcore 27 as core 27 on socket 0 00:05:39.319 EAL: Detected lcore 28 as core 28 on socket 0 00:05:39.319 EAL: Detected lcore 29 as core 29 on socket 0 00:05:39.319 EAL: Detected lcore 30 as core 30 on socket 0 00:05:39.319 EAL: Detected lcore 31 as core 31 on socket 0 00:05:39.319 EAL: Detected lcore 32 as core 32 on socket 0 00:05:39.319 EAL: Detected lcore 33 as core 33 on socket 0 00:05:39.319 EAL: Detected lcore 34 as core 34 on socket 0 00:05:39.319 EAL: Detected lcore 35 as core 35 on socket 0 00:05:39.319 EAL: Detected lcore 36 as core 0 on socket 1 00:05:39.319 EAL: Detected lcore 37 as core 1 on socket 1 00:05:39.319 EAL: Detected lcore 38 as core 2 on socket 1 00:05:39.319 EAL: Detected lcore 39 as core 3 on socket 1 00:05:39.319 EAL: Detected lcore 40 as core 4 on socket 1 00:05:39.319 EAL: Detected lcore 41 as core 5 on socket 1 00:05:39.319 EAL: Detected lcore 42 as core 6 on socket 1 00:05:39.319 EAL: Detected lcore 43 as core 7 on socket 1 00:05:39.319 EAL: Detected lcore 44 as core 8 on socket 1 00:05:39.319 EAL: Detected lcore 45 as core 9 on socket 1 00:05:39.319 EAL: Detected lcore 46 as core 10 on socket 1 00:05:39.319 EAL: Detected lcore 47 as core 11 on socket 1 00:05:39.319 EAL: Detected lcore 48 as core 12 on socket 1 00:05:39.319 EAL: Detected lcore 49 as core 13 on socket 1 00:05:39.319 EAL: Detected lcore 50 as core 14 on socket 1 00:05:39.319 EAL: Detected lcore 51 as core 15 on socket 1 00:05:39.319 EAL: Detected lcore 52 as core 16 on socket 1 00:05:39.319 EAL: Detected lcore 
53 as core 17 on socket 1 00:05:39.319 EAL: Detected lcore 54 as core 18 on socket 1 00:05:39.319 EAL: Detected lcore 55 as core 19 on socket 1 00:05:39.319 EAL: Detected lcore 56 as core 20 on socket 1 00:05:39.319 EAL: Detected lcore 57 as core 21 on socket 1 00:05:39.319 EAL: Detected lcore 58 as core 22 on socket 1 00:05:39.319 EAL: Detected lcore 59 as core 23 on socket 1 00:05:39.320 EAL: Detected lcore 60 as core 24 on socket 1 00:05:39.320 EAL: Detected lcore 61 as core 25 on socket 1 00:05:39.320 EAL: Detected lcore 62 as core 26 on socket 1 00:05:39.320 EAL: Detected lcore 63 as core 27 on socket 1 00:05:39.320 EAL: Detected lcore 64 as core 28 on socket 1 00:05:39.320 EAL: Detected lcore 65 as core 29 on socket 1 00:05:39.320 EAL: Detected lcore 66 as core 30 on socket 1 00:05:39.320 EAL: Detected lcore 67 as core 31 on socket 1 00:05:39.320 EAL: Detected lcore 68 as core 32 on socket 1 00:05:39.320 EAL: Detected lcore 69 as core 33 on socket 1 00:05:39.320 EAL: Detected lcore 70 as core 34 on socket 1 00:05:39.320 EAL: Detected lcore 71 as core 35 on socket 1 00:05:39.320 EAL: Detected lcore 72 as core 0 on socket 0 00:05:39.320 EAL: Detected lcore 73 as core 1 on socket 0 00:05:39.320 EAL: Detected lcore 74 as core 2 on socket 0 00:05:39.320 EAL: Detected lcore 75 as core 3 on socket 0 00:05:39.320 EAL: Detected lcore 76 as core 4 on socket 0 00:05:39.320 EAL: Detected lcore 77 as core 5 on socket 0 00:05:39.320 EAL: Detected lcore 78 as core 6 on socket 0 00:05:39.320 EAL: Detected lcore 79 as core 7 on socket 0 00:05:39.320 EAL: Detected lcore 80 as core 8 on socket 0 00:05:39.320 EAL: Detected lcore 81 as core 9 on socket 0 00:05:39.320 EAL: Detected lcore 82 as core 10 on socket 0 00:05:39.320 EAL: Detected lcore 83 as core 11 on socket 0 00:05:39.320 EAL: Detected lcore 84 as core 12 on socket 0 00:05:39.320 EAL: Detected lcore 85 as core 13 on socket 0 00:05:39.320 EAL: Detected lcore 86 as core 14 on socket 0 00:05:39.320 EAL: Detected lcore 87 as core 15 on socket 0 00:05:39.320 EAL: Detected lcore 88 as core 16 on socket 0 00:05:39.320 EAL: Detected lcore 89 as core 17 on socket 0 00:05:39.320 EAL: Detected lcore 90 as core 18 on socket 0 00:05:39.320 EAL: Detected lcore 91 as core 19 on socket 0 00:05:39.320 EAL: Detected lcore 92 as core 20 on socket 0 00:05:39.320 EAL: Detected lcore 93 as core 21 on socket 0 00:05:39.320 EAL: Detected lcore 94 as core 22 on socket 0 00:05:39.320 EAL: Detected lcore 95 as core 23 on socket 0 00:05:39.320 EAL: Detected lcore 96 as core 24 on socket 0 00:05:39.320 EAL: Detected lcore 97 as core 25 on socket 0 00:05:39.320 EAL: Detected lcore 98 as core 26 on socket 0 00:05:39.320 EAL: Detected lcore 99 as core 27 on socket 0 00:05:39.320 EAL: Detected lcore 100 as core 28 on socket 0 00:05:39.320 EAL: Detected lcore 101 as core 29 on socket 0 00:05:39.320 EAL: Detected lcore 102 as core 30 on socket 0 00:05:39.320 EAL: Detected lcore 103 as core 31 on socket 0 00:05:39.320 EAL: Detected lcore 104 as core 32 on socket 0 00:05:39.320 EAL: Detected lcore 105 as core 33 on socket 0 00:05:39.320 EAL: Detected lcore 106 as core 34 on socket 0 00:05:39.320 EAL: Detected lcore 107 as core 35 on socket 0 00:05:39.320 EAL: Detected lcore 108 as core 0 on socket 1 00:05:39.320 EAL: Detected lcore 109 as core 1 on socket 1 00:05:39.320 EAL: Detected lcore 110 as core 2 on socket 1 00:05:39.320 EAL: Detected lcore 111 as core 3 on socket 1 00:05:39.320 EAL: Detected lcore 112 as core 4 on socket 1 00:05:39.320 EAL: Detected lcore 113 as core 5 on 
socket 1 00:05:39.320 EAL: Detected lcore 114 as core 6 on socket 1 00:05:39.320 EAL: Detected lcore 115 as core 7 on socket 1 00:05:39.320 EAL: Detected lcore 116 as core 8 on socket 1 00:05:39.320 EAL: Detected lcore 117 as core 9 on socket 1 00:05:39.320 EAL: Detected lcore 118 as core 10 on socket 1 00:05:39.320 EAL: Detected lcore 119 as core 11 on socket 1 00:05:39.320 EAL: Detected lcore 120 as core 12 on socket 1 00:05:39.320 EAL: Detected lcore 121 as core 13 on socket 1 00:05:39.320 EAL: Detected lcore 122 as core 14 on socket 1 00:05:39.320 EAL: Detected lcore 123 as core 15 on socket 1 00:05:39.320 EAL: Detected lcore 124 as core 16 on socket 1 00:05:39.320 EAL: Detected lcore 125 as core 17 on socket 1 00:05:39.320 EAL: Detected lcore 126 as core 18 on socket 1 00:05:39.320 EAL: Detected lcore 127 as core 19 on socket 1 00:05:39.320 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:39.320 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:39.320 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:39.320 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:39.320 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:39.320 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:39.320 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:39.320 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:39.320 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:39.320 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:39.320 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:39.320 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:39.320 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:39.320 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:39.320 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:39.320 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:39.320 EAL: Maximum logical cores by configuration: 128 00:05:39.320 EAL: Detected CPU lcores: 128 00:05:39.320 EAL: Detected NUMA nodes: 2 00:05:39.320 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:39.320 EAL: Detected shared linkage of DPDK 00:05:39.320 EAL: No shared files mode enabled, IPC will be disabled 00:05:39.320 EAL: Bus pci wants IOVA as 'DC' 00:05:39.320 EAL: Buses did not request a specific IOVA mode. 00:05:39.320 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:39.320 EAL: Selected IOVA mode 'VA' 00:05:39.320 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.320 EAL: Probing VFIO support... 00:05:39.320 EAL: IOMMU type 1 (Type 1) is supported 00:05:39.320 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:39.320 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:39.320 EAL: VFIO support initialized 00:05:39.320 EAL: Ask a virtual area of 0x2e000 bytes 00:05:39.320 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:39.320 EAL: Setting up physically contiguous memory... 
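Before mapping any memory, EAL confirms above that IOMMU type 1 is available, selects IOVA-as-VA mode, and initializes VFIO. When that probe fails on other machines, a quick way to see what the kernel exposes is to look at the same interfaces EAL consults; the sketch below is a generic Linux check and is not part of the SPDK test itself.

#!/usr/bin/env bash
# Sketch: report whether the kernel exposes an IOMMU and the vfio-pci driver,
# the preconditions behind the "VFIO support initialized" line above.
if compgen -G "/sys/kernel/iommu_groups/*" > /dev/null; then
  echo "IOMMU groups present: $(ls /sys/kernel/iommu_groups | wc -l)"
else
  echo "no IOMMU groups found; check BIOS VT-d/AMD-Vi and kernel iommu= options"
fi
if [[ -d /sys/bus/pci/drivers/vfio-pci ]]; then
  echo "vfio-pci driver is loaded"
else
  echo "vfio-pci driver is not loaded (modprobe vfio-pci)"
fi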
00:05:39.320 EAL: Setting maximum number of open files to 524288 00:05:39.320 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:39.320 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:39.320 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:39.320 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.320 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:39.320 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:39.320 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.320 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:39.320 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:39.320 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.320 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:39.320 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:39.320 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.320 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:39.320 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:39.320 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.320 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:39.320 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:39.320 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.320 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:39.320 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:39.320 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.320 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:39.320 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:39.320 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.320 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:39.320 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:39.320 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:39.320 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.320 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:39.320 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:39.320 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.320 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:39.320 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:39.320 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.320 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:39.320 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:39.320 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.320 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:39.320 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:39.320 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.320 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:39.320 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:39.320 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.320 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:39.320 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:39.320 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.320 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:39.320 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:39.320 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.320 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:39.320 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:39.320 EAL: Hugepages will be freed exactly as allocated. 00:05:39.320 EAL: No shared files mode enabled, IPC is disabled 00:05:39.320 EAL: No shared files mode enabled, IPC is disabled 00:05:39.320 EAL: TSC frequency is ~2400000 KHz 00:05:39.320 EAL: Main lcore 0 is ready (tid=7f12a251fa00;cpuset=[0]) 00:05:39.320 EAL: Trying to obtain current memory policy. 00:05:39.320 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.320 EAL: Restoring previous memory policy: 0 00:05:39.320 EAL: request: mp_malloc_sync 00:05:39.320 EAL: No shared files mode enabled, IPC is disabled 00:05:39.320 EAL: Heap on socket 0 was expanded by 2MB 00:05:39.320 EAL: No shared files mode enabled, IPC is disabled 00:05:39.320 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:39.320 EAL: Mem event callback 'spdk:(nil)' registered 00:05:39.320 00:05:39.320 00:05:39.320 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.321 http://cunit.sourceforge.net/ 00:05:39.321 00:05:39.321 00:05:39.321 Suite: components_suite 00:05:39.321 Test: vtophys_malloc_test ...passed 00:05:39.321 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:39.321 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.321 EAL: Restoring previous memory policy: 4 00:05:39.321 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.321 EAL: request: mp_malloc_sync 00:05:39.321 EAL: No shared files mode enabled, IPC is disabled 00:05:39.321 EAL: Heap on socket 0 was expanded by 4MB 00:05:39.321 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.321 EAL: request: mp_malloc_sync 00:05:39.321 EAL: No shared files mode enabled, IPC is disabled 00:05:39.321 EAL: Heap on socket 0 was shrunk by 4MB 00:05:39.321 EAL: Trying to obtain current memory policy. 00:05:39.321 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.321 EAL: Restoring previous memory policy: 4 00:05:39.321 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.321 EAL: request: mp_malloc_sync 00:05:39.321 EAL: No shared files mode enabled, IPC is disabled 00:05:39.321 EAL: Heap on socket 0 was expanded by 6MB 00:05:39.321 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.321 EAL: request: mp_malloc_sync 00:05:39.321 EAL: No shared files mode enabled, IPC is disabled 00:05:39.321 EAL: Heap on socket 0 was shrunk by 6MB 00:05:39.321 EAL: Trying to obtain current memory policy. 00:05:39.321 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.321 EAL: Restoring previous memory policy: 4 00:05:39.321 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.321 EAL: request: mp_malloc_sync 00:05:39.321 EAL: No shared files mode enabled, IPC is disabled 00:05:39.321 EAL: Heap on socket 0 was expanded by 10MB 00:05:39.321 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.321 EAL: request: mp_malloc_sync 00:05:39.321 EAL: No shared files mode enabled, IPC is disabled 00:05:39.321 EAL: Heap on socket 0 was shrunk by 10MB 00:05:39.321 EAL: Trying to obtain current memory policy. 
00:05:39.321 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.321 EAL: Restoring previous memory policy: 4 00:05:39.321 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.321 EAL: request: mp_malloc_sync 00:05:39.321 EAL: No shared files mode enabled, IPC is disabled 00:05:39.321 EAL: Heap on socket 0 was expanded by 18MB 00:05:39.321 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.321 EAL: request: mp_malloc_sync 00:05:39.321 EAL: No shared files mode enabled, IPC is disabled 00:05:39.321 EAL: Heap on socket 0 was shrunk by 18MB 00:05:39.321 EAL: Trying to obtain current memory policy. 00:05:39.321 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.321 EAL: Restoring previous memory policy: 4 00:05:39.321 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.321 EAL: request: mp_malloc_sync 00:05:39.321 EAL: No shared files mode enabled, IPC is disabled 00:05:39.321 EAL: Heap on socket 0 was expanded by 34MB 00:05:39.321 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.321 EAL: request: mp_malloc_sync 00:05:39.321 EAL: No shared files mode enabled, IPC is disabled 00:05:39.321 EAL: Heap on socket 0 was shrunk by 34MB 00:05:39.321 EAL: Trying to obtain current memory policy. 00:05:39.321 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.321 EAL: Restoring previous memory policy: 4 00:05:39.321 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.321 EAL: request: mp_malloc_sync 00:05:39.321 EAL: No shared files mode enabled, IPC is disabled 00:05:39.321 EAL: Heap on socket 0 was expanded by 66MB 00:05:39.321 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.321 EAL: request: mp_malloc_sync 00:05:39.321 EAL: No shared files mode enabled, IPC is disabled 00:05:39.321 EAL: Heap on socket 0 was shrunk by 66MB 00:05:39.321 EAL: Trying to obtain current memory policy. 00:05:39.321 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.321 EAL: Restoring previous memory policy: 4 00:05:39.321 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.321 EAL: request: mp_malloc_sync 00:05:39.321 EAL: No shared files mode enabled, IPC is disabled 00:05:39.321 EAL: Heap on socket 0 was expanded by 130MB 00:05:39.581 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.581 EAL: request: mp_malloc_sync 00:05:39.581 EAL: No shared files mode enabled, IPC is disabled 00:05:39.582 EAL: Heap on socket 0 was shrunk by 130MB 00:05:39.582 EAL: Trying to obtain current memory policy. 00:05:39.582 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.582 EAL: Restoring previous memory policy: 4 00:05:39.582 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.582 EAL: request: mp_malloc_sync 00:05:39.582 EAL: No shared files mode enabled, IPC is disabled 00:05:39.582 EAL: Heap on socket 0 was expanded by 258MB 00:05:39.582 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.582 EAL: request: mp_malloc_sync 00:05:39.582 EAL: No shared files mode enabled, IPC is disabled 00:05:39.582 EAL: Heap on socket 0 was shrunk by 258MB 00:05:39.582 EAL: Trying to obtain current memory policy. 
00:05:39.582 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.582 EAL: Restoring previous memory policy: 4 00:05:39.582 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.582 EAL: request: mp_malloc_sync 00:05:39.582 EAL: No shared files mode enabled, IPC is disabled 00:05:39.582 EAL: Heap on socket 0 was expanded by 514MB 00:05:39.582 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.841 EAL: request: mp_malloc_sync 00:05:39.841 EAL: No shared files mode enabled, IPC is disabled 00:05:39.841 EAL: Heap on socket 0 was shrunk by 514MB 00:05:39.841 EAL: Trying to obtain current memory policy. 00:05:39.841 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.841 EAL: Restoring previous memory policy: 4 00:05:39.841 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.841 EAL: request: mp_malloc_sync 00:05:39.841 EAL: No shared files mode enabled, IPC is disabled 00:05:39.841 EAL: Heap on socket 0 was expanded by 1026MB 00:05:40.101 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.101 EAL: request: mp_malloc_sync 00:05:40.101 EAL: No shared files mode enabled, IPC is disabled 00:05:40.101 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:40.101 passed 00:05:40.101 00:05:40.101 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.101 suites 1 1 n/a 0 0 00:05:40.101 tests 2 2 2 0 0 00:05:40.101 asserts 497 497 497 0 n/a 00:05:40.101 00:05:40.101 Elapsed time = 0.685 seconds 00:05:40.101 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.101 EAL: request: mp_malloc_sync 00:05:40.101 EAL: No shared files mode enabled, IPC is disabled 00:05:40.101 EAL: Heap on socket 0 was shrunk by 2MB 00:05:40.101 EAL: No shared files mode enabled, IPC is disabled 00:05:40.101 EAL: No shared files mode enabled, IPC is disabled 00:05:40.101 EAL: No shared files mode enabled, IPC is disabled 00:05:40.101 00:05:40.101 real 0m0.839s 00:05:40.101 user 0m0.434s 00:05:40.102 sys 0m0.366s 00:05:40.102 19:57:32 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.102 19:57:32 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:40.102 ************************************ 00:05:40.102 END TEST env_vtophys 00:05:40.102 ************************************ 00:05:40.102 19:57:32 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:40.102 19:57:32 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:40.102 19:57:32 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.102 19:57:32 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.102 ************************************ 00:05:40.102 START TEST env_pci 00:05:40.102 ************************************ 00:05:40.102 19:57:32 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:40.102 00:05:40.102 00:05:40.102 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.102 http://cunit.sourceforge.net/ 00:05:40.102 00:05:40.102 00:05:40.102 Suite: pci 00:05:40.102 Test: pci_hook ...[2024-05-15 19:57:32.569531] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3993917 has claimed it 00:05:40.362 EAL: Cannot find device (10000:00:01.0) 00:05:40.362 EAL: Failed to attach device on primary process 00:05:40.362 passed 00:05:40.362 00:05:40.362 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:40.362 suites 1 1 n/a 0 0 00:05:40.362 tests 1 1 1 0 0 00:05:40.362 asserts 25 25 25 0 n/a 00:05:40.362 00:05:40.362 Elapsed time = 0.034 seconds 00:05:40.362 00:05:40.362 real 0m0.055s 00:05:40.362 user 0m0.017s 00:05:40.362 sys 0m0.038s 00:05:40.362 19:57:32 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.362 19:57:32 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:40.362 ************************************ 00:05:40.362 END TEST env_pci 00:05:40.362 ************************************ 00:05:40.362 19:57:32 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:40.362 19:57:32 env -- env/env.sh@15 -- # uname 00:05:40.362 19:57:32 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:40.362 19:57:32 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:40.362 19:57:32 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:40.362 19:57:32 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:40.362 19:57:32 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.362 19:57:32 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.362 ************************************ 00:05:40.362 START TEST env_dpdk_post_init 00:05:40.362 ************************************ 00:05:40.362 19:57:32 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:40.362 EAL: Detected CPU lcores: 128 00:05:40.362 EAL: Detected NUMA nodes: 2 00:05:40.362 EAL: Detected shared linkage of DPDK 00:05:40.362 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:40.362 EAL: Selected IOVA mode 'VA' 00:05:40.362 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.362 EAL: VFIO support initialized 00:05:40.362 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:40.362 EAL: Using IOMMU type 1 (Type 1) 00:05:40.623 EAL: Ignore mapping IO port bar(1) 00:05:40.623 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:40.885 EAL: Ignore mapping IO port bar(1) 00:05:40.885 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:41.145 EAL: Ignore mapping IO port bar(1) 00:05:41.145 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:41.145 EAL: Ignore mapping IO port bar(1) 00:05:41.407 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:41.407 EAL: Ignore mapping IO port bar(1) 00:05:41.668 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:41.668 EAL: Ignore mapping IO port bar(1) 00:05:41.929 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:41.929 EAL: Ignore mapping IO port bar(1) 00:05:41.929 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:42.190 EAL: Ignore mapping IO port bar(1) 00:05:42.190 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:42.451 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:42.712 EAL: Ignore mapping IO port bar(1) 00:05:42.712 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:05:42.712 EAL: Ignore mapping IO port bar(1) 00:05:42.972 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 
00:05:42.972 EAL: Ignore mapping IO port bar(1) 00:05:43.233 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:05:43.233 EAL: Ignore mapping IO port bar(1) 00:05:43.494 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:43.494 EAL: Ignore mapping IO port bar(1) 00:05:43.494 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:43.756 EAL: Ignore mapping IO port bar(1) 00:05:43.756 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:44.017 EAL: Ignore mapping IO port bar(1) 00:05:44.017 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:44.278 EAL: Ignore mapping IO port bar(1) 00:05:44.278 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:44.278 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:44.278 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:44.278 Starting DPDK initialization... 00:05:44.278 Starting SPDK post initialization... 00:05:44.278 SPDK NVMe probe 00:05:44.278 Attaching to 0000:65:00.0 00:05:44.278 Attached to 0000:65:00.0 00:05:44.278 Cleaning up... 00:05:46.198 00:05:46.198 real 0m5.743s 00:05:46.198 user 0m0.201s 00:05:46.198 sys 0m0.095s 00:05:46.198 19:57:38 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:46.198 19:57:38 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:46.198 ************************************ 00:05:46.198 END TEST env_dpdk_post_init 00:05:46.198 ************************************ 00:05:46.198 19:57:38 env -- env/env.sh@26 -- # uname 00:05:46.198 19:57:38 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:46.198 19:57:38 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:46.198 19:57:38 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:46.198 19:57:38 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:46.198 19:57:38 env -- common/autotest_common.sh@10 -- # set +x 00:05:46.198 ************************************ 00:05:46.198 START TEST env_mem_callbacks 00:05:46.198 ************************************ 00:05:46.198 19:57:38 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:46.198 EAL: Detected CPU lcores: 128 00:05:46.198 EAL: Detected NUMA nodes: 2 00:05:46.198 EAL: Detected shared linkage of DPDK 00:05:46.198 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:46.198 EAL: Selected IOVA mode 'VA' 00:05:46.198 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.198 EAL: VFIO support initialized 00:05:46.198 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:46.198 00:05:46.198 00:05:46.198 CUnit - A unit testing framework for C - Version 2.1-3 00:05:46.198 http://cunit.sourceforge.net/ 00:05:46.198 00:05:46.198 00:05:46.198 Suite: memory 00:05:46.198 Test: test ... 
00:05:46.198 register 0x200000200000 2097152 00:05:46.198 malloc 3145728 00:05:46.198 register 0x200000400000 4194304 00:05:46.198 buf 0x200000500000 len 3145728 PASSED 00:05:46.198 malloc 64 00:05:46.198 buf 0x2000004fff40 len 64 PASSED 00:05:46.198 malloc 4194304 00:05:46.198 register 0x200000800000 6291456 00:05:46.198 buf 0x200000a00000 len 4194304 PASSED 00:05:46.198 free 0x200000500000 3145728 00:05:46.198 free 0x2000004fff40 64 00:05:46.198 unregister 0x200000400000 4194304 PASSED 00:05:46.198 free 0x200000a00000 4194304 00:05:46.198 unregister 0x200000800000 6291456 PASSED 00:05:46.198 malloc 8388608 00:05:46.198 register 0x200000400000 10485760 00:05:46.198 buf 0x200000600000 len 8388608 PASSED 00:05:46.198 free 0x200000600000 8388608 00:05:46.198 unregister 0x200000400000 10485760 PASSED 00:05:46.198 passed 00:05:46.198 00:05:46.198 Run Summary: Type Total Ran Passed Failed Inactive 00:05:46.198 suites 1 1 n/a 0 0 00:05:46.198 tests 1 1 1 0 0 00:05:46.198 asserts 15 15 15 0 n/a 00:05:46.198 00:05:46.198 Elapsed time = 0.010 seconds 00:05:46.198 00:05:46.198 real 0m0.074s 00:05:46.198 user 0m0.023s 00:05:46.198 sys 0m0.050s 00:05:46.198 19:57:38 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:46.199 19:57:38 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:46.199 ************************************ 00:05:46.199 END TEST env_mem_callbacks 00:05:46.199 ************************************ 00:05:46.199 00:05:46.199 real 0m7.452s 00:05:46.199 user 0m1.081s 00:05:46.199 sys 0m0.895s 00:05:46.199 19:57:38 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:46.199 19:57:38 env -- common/autotest_common.sh@10 -- # set +x 00:05:46.199 ************************************ 00:05:46.199 END TEST env 00:05:46.199 ************************************ 00:05:46.199 19:57:38 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:46.199 19:57:38 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:46.199 19:57:38 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:46.199 19:57:38 -- common/autotest_common.sh@10 -- # set +x 00:05:46.460 ************************************ 00:05:46.460 START TEST rpc 00:05:46.460 ************************************ 00:05:46.460 19:57:38 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:46.460 * Looking for test storage... 00:05:46.460 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:46.460 19:57:38 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3995176 00:05:46.460 19:57:38 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:46.460 19:57:38 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:46.460 19:57:38 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3995176 00:05:46.460 19:57:38 rpc -- common/autotest_common.sh@827 -- # '[' -z 3995176 ']' 00:05:46.460 19:57:38 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.460 19:57:38 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:46.460 19:57:38 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
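The rpc suite starting above launches a standalone spdk_tgt and waits on its UNIX-domain socket; the trace that follows then exercises bdev RPCs, with `bdev_get_bdevs`, `bdev_malloc_create 8 512`, `bdev_passthru_create -b Malloc0 -p Passthru0`, and the matching delete calls all visible in the log. Outside the harness the same sequence would normally go through scripts/rpc.py; a hedged sketch of that flow is below, where the socket path, the polling loop standing in for waitforlisten, and the jq post-processing are illustrative assumptions rather than exact harness behaviour.

#!/usr/bin/env bash
# Sketch: reproduce the bdev RPC sequence exercised by the rpc_integrity test.
# Paths and the wait loop are illustrative; the harness uses waitforlisten.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sock=/var/tmp/spdk.sock

"$SPDK/build/bin/spdk_tgt" -r "$sock" &
tgt_pid=$!
until [[ -S $sock ]]; do sleep 0.1; done        # crude stand-in for waitforlisten

rpc() { "$SPDK/scripts/rpc.py" -s "$sock" "$@"; }

rpc bdev_get_bdevs | jq length                  # expect 0 on a fresh target
rpc bdev_malloc_create 8 512                    # 8 MiB malloc bdev, 512-byte blocks
rpc bdev_passthru_create -b Malloc0 -p Passthru0
rpc bdev_get_bdevs | jq length                  # expect 2 (Malloc0 + Passthru0)
rpc bdev_passthru_delete Passthru0
rpc bdev_malloc_delete Malloc0

kill "$tgt_pid"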
00:05:46.460 19:57:38 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:46.460 19:57:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.460 [2024-05-15 19:57:38.896226] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:05:46.460 [2024-05-15 19:57:38.896291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3995176 ] 00:05:46.460 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.721 [2024-05-15 19:57:38.988515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.721 [2024-05-15 19:57:39.087131] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:46.721 [2024-05-15 19:57:39.087190] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3995176' to capture a snapshot of events at runtime. 00:05:46.721 [2024-05-15 19:57:39.087199] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:46.722 [2024-05-15 19:57:39.087206] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:46.722 [2024-05-15 19:57:39.087212] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3995176 for offline analysis/debug. 00:05:46.722 [2024-05-15 19:57:39.087242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.293 19:57:39 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:47.293 19:57:39 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:47.293 19:57:39 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:47.293 19:57:39 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:47.293 19:57:39 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:47.293 19:57:39 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:47.293 19:57:39 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:47.293 19:57:39 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.293 19:57:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.554 ************************************ 00:05:47.554 START TEST rpc_integrity 00:05:47.554 ************************************ 00:05:47.554 19:57:39 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:47.554 19:57:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:47.554 19:57:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.554 19:57:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.554 19:57:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.554 19:57:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:47.554 19:57:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:47.554 19:57:39 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:47.554 19:57:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:47.554 19:57:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.554 19:57:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.554 19:57:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.554 19:57:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:47.554 19:57:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:47.554 19:57:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.554 19:57:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.554 19:57:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.555 19:57:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:47.555 { 00:05:47.555 "name": "Malloc0", 00:05:47.555 "aliases": [ 00:05:47.555 "85580267-9b4b-409a-bca9-1b81002f3003" 00:05:47.555 ], 00:05:47.555 "product_name": "Malloc disk", 00:05:47.555 "block_size": 512, 00:05:47.555 "num_blocks": 16384, 00:05:47.555 "uuid": "85580267-9b4b-409a-bca9-1b81002f3003", 00:05:47.555 "assigned_rate_limits": { 00:05:47.555 "rw_ios_per_sec": 0, 00:05:47.555 "rw_mbytes_per_sec": 0, 00:05:47.555 "r_mbytes_per_sec": 0, 00:05:47.555 "w_mbytes_per_sec": 0 00:05:47.555 }, 00:05:47.555 "claimed": false, 00:05:47.555 "zoned": false, 00:05:47.555 "supported_io_types": { 00:05:47.555 "read": true, 00:05:47.555 "write": true, 00:05:47.555 "unmap": true, 00:05:47.555 "write_zeroes": true, 00:05:47.555 "flush": true, 00:05:47.555 "reset": true, 00:05:47.555 "compare": false, 00:05:47.555 "compare_and_write": false, 00:05:47.555 "abort": true, 00:05:47.555 "nvme_admin": false, 00:05:47.555 "nvme_io": false 00:05:47.555 }, 00:05:47.555 "memory_domains": [ 00:05:47.555 { 00:05:47.555 "dma_device_id": "system", 00:05:47.555 "dma_device_type": 1 00:05:47.555 }, 00:05:47.555 { 00:05:47.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.555 "dma_device_type": 2 00:05:47.555 } 00:05:47.555 ], 00:05:47.555 "driver_specific": {} 00:05:47.555 } 00:05:47.555 ]' 00:05:47.555 19:57:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:47.555 19:57:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:47.555 19:57:39 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:47.555 19:57:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.555 19:57:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.555 [2024-05-15 19:57:39.956842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:47.555 [2024-05-15 19:57:39.956894] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:47.555 [2024-05-15 19:57:39.956910] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8733b0 00:05:47.555 [2024-05-15 19:57:39.956918] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:47.555 [2024-05-15 19:57:39.958469] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:47.555 [2024-05-15 19:57:39.958508] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:47.555 Passthru0 00:05:47.555 19:57:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.555 19:57:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:47.555 19:57:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.555 19:57:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.555 19:57:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.555 19:57:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:47.555 { 00:05:47.555 "name": "Malloc0", 00:05:47.555 "aliases": [ 00:05:47.555 "85580267-9b4b-409a-bca9-1b81002f3003" 00:05:47.555 ], 00:05:47.555 "product_name": "Malloc disk", 00:05:47.555 "block_size": 512, 00:05:47.555 "num_blocks": 16384, 00:05:47.555 "uuid": "85580267-9b4b-409a-bca9-1b81002f3003", 00:05:47.555 "assigned_rate_limits": { 00:05:47.555 "rw_ios_per_sec": 0, 00:05:47.555 "rw_mbytes_per_sec": 0, 00:05:47.555 "r_mbytes_per_sec": 0, 00:05:47.555 "w_mbytes_per_sec": 0 00:05:47.555 }, 00:05:47.555 "claimed": true, 00:05:47.555 "claim_type": "exclusive_write", 00:05:47.555 "zoned": false, 00:05:47.555 "supported_io_types": { 00:05:47.555 "read": true, 00:05:47.555 "write": true, 00:05:47.555 "unmap": true, 00:05:47.555 "write_zeroes": true, 00:05:47.555 "flush": true, 00:05:47.555 "reset": true, 00:05:47.555 "compare": false, 00:05:47.555 "compare_and_write": false, 00:05:47.555 "abort": true, 00:05:47.555 "nvme_admin": false, 00:05:47.555 "nvme_io": false 00:05:47.555 }, 00:05:47.555 "memory_domains": [ 00:05:47.555 { 00:05:47.555 "dma_device_id": "system", 00:05:47.555 "dma_device_type": 1 00:05:47.555 }, 00:05:47.555 { 00:05:47.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.555 "dma_device_type": 2 00:05:47.555 } 00:05:47.555 ], 00:05:47.555 "driver_specific": {} 00:05:47.555 }, 00:05:47.555 { 00:05:47.555 "name": "Passthru0", 00:05:47.555 "aliases": [ 00:05:47.555 "42f77326-e58d-5095-bcbe-81d2d6551166" 00:05:47.555 ], 00:05:47.555 "product_name": "passthru", 00:05:47.555 "block_size": 512, 00:05:47.555 "num_blocks": 16384, 00:05:47.555 "uuid": "42f77326-e58d-5095-bcbe-81d2d6551166", 00:05:47.555 "assigned_rate_limits": { 00:05:47.555 "rw_ios_per_sec": 0, 00:05:47.555 "rw_mbytes_per_sec": 0, 00:05:47.555 "r_mbytes_per_sec": 0, 00:05:47.555 "w_mbytes_per_sec": 0 00:05:47.555 }, 00:05:47.555 "claimed": false, 00:05:47.555 "zoned": false, 00:05:47.555 "supported_io_types": { 00:05:47.555 "read": true, 00:05:47.555 "write": true, 00:05:47.555 "unmap": true, 00:05:47.555 "write_zeroes": true, 00:05:47.555 "flush": true, 00:05:47.555 "reset": true, 00:05:47.555 "compare": false, 00:05:47.555 "compare_and_write": false, 00:05:47.555 "abort": true, 00:05:47.555 "nvme_admin": false, 00:05:47.555 "nvme_io": false 00:05:47.555 }, 00:05:47.555 "memory_domains": [ 00:05:47.555 { 00:05:47.555 "dma_device_id": "system", 00:05:47.555 "dma_device_type": 1 00:05:47.555 }, 00:05:47.555 { 00:05:47.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.555 "dma_device_type": 2 00:05:47.555 } 00:05:47.555 ], 00:05:47.555 "driver_specific": { 00:05:47.555 "passthru": { 00:05:47.555 "name": "Passthru0", 00:05:47.555 "base_bdev_name": "Malloc0" 00:05:47.555 } 00:05:47.555 } 00:05:47.555 } 00:05:47.555 ]' 00:05:47.555 19:57:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:47.555 19:57:40 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:47.555 19:57:40 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:47.555 19:57:40 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.555 19:57:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.555 
19:57:40 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.555 19:57:40 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:47.555 19:57:40 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.555 19:57:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.555 19:57:40 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.555 19:57:40 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:47.555 19:57:40 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.555 19:57:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.817 19:57:40 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.817 19:57:40 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:47.817 19:57:40 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:47.817 19:57:40 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:47.817 00:05:47.817 real 0m0.289s 00:05:47.817 user 0m0.182s 00:05:47.817 sys 0m0.044s 00:05:47.817 19:57:40 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:47.817 19:57:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.817 ************************************ 00:05:47.817 END TEST rpc_integrity 00:05:47.817 ************************************ 00:05:47.817 19:57:40 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:47.817 19:57:40 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:47.817 19:57:40 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.817 19:57:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.817 ************************************ 00:05:47.817 START TEST rpc_plugins 00:05:47.817 ************************************ 00:05:47.817 19:57:40 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:47.817 19:57:40 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:47.817 19:57:40 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.817 19:57:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:47.817 19:57:40 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.817 19:57:40 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:47.817 19:57:40 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:47.817 19:57:40 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.817 19:57:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:47.817 19:57:40 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.817 19:57:40 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:47.817 { 00:05:47.817 "name": "Malloc1", 00:05:47.817 "aliases": [ 00:05:47.817 "d9f05cd5-6c9e-4c5f-977a-b2e919fc40b3" 00:05:47.817 ], 00:05:47.817 "product_name": "Malloc disk", 00:05:47.817 "block_size": 4096, 00:05:47.817 "num_blocks": 256, 00:05:47.817 "uuid": "d9f05cd5-6c9e-4c5f-977a-b2e919fc40b3", 00:05:47.817 "assigned_rate_limits": { 00:05:47.817 "rw_ios_per_sec": 0, 00:05:47.817 "rw_mbytes_per_sec": 0, 00:05:47.817 "r_mbytes_per_sec": 0, 00:05:47.817 "w_mbytes_per_sec": 0 00:05:47.817 }, 00:05:47.817 "claimed": false, 00:05:47.817 "zoned": false, 00:05:47.817 "supported_io_types": { 00:05:47.817 "read": true, 00:05:47.817 "write": true, 00:05:47.817 "unmap": true, 00:05:47.817 "write_zeroes": true, 00:05:47.817 
"flush": true, 00:05:47.817 "reset": true, 00:05:47.817 "compare": false, 00:05:47.817 "compare_and_write": false, 00:05:47.817 "abort": true, 00:05:47.817 "nvme_admin": false, 00:05:47.817 "nvme_io": false 00:05:47.817 }, 00:05:47.817 "memory_domains": [ 00:05:47.817 { 00:05:47.817 "dma_device_id": "system", 00:05:47.817 "dma_device_type": 1 00:05:47.817 }, 00:05:47.817 { 00:05:47.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.817 "dma_device_type": 2 00:05:47.817 } 00:05:47.817 ], 00:05:47.817 "driver_specific": {} 00:05:47.817 } 00:05:47.817 ]' 00:05:47.817 19:57:40 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:47.817 19:57:40 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:47.817 19:57:40 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:47.817 19:57:40 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.817 19:57:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:47.817 19:57:40 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.817 19:57:40 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:47.817 19:57:40 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.817 19:57:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:47.817 19:57:40 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.817 19:57:40 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:47.817 19:57:40 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:48.079 19:57:40 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:48.079 00:05:48.079 real 0m0.150s 00:05:48.079 user 0m0.096s 00:05:48.079 sys 0m0.018s 00:05:48.079 19:57:40 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:48.079 19:57:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:48.079 ************************************ 00:05:48.079 END TEST rpc_plugins 00:05:48.079 ************************************ 00:05:48.079 19:57:40 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:48.079 19:57:40 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:48.079 19:57:40 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.079 19:57:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.079 ************************************ 00:05:48.079 START TEST rpc_trace_cmd_test 00:05:48.079 ************************************ 00:05:48.079 19:57:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:48.079 19:57:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:48.079 19:57:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:48.079 19:57:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.079 19:57:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:48.079 19:57:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.079 19:57:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:48.079 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3995176", 00:05:48.079 "tpoint_group_mask": "0x8", 00:05:48.079 "iscsi_conn": { 00:05:48.079 "mask": "0x2", 00:05:48.079 "tpoint_mask": "0x0" 00:05:48.079 }, 00:05:48.079 "scsi": { 00:05:48.079 "mask": "0x4", 00:05:48.079 "tpoint_mask": "0x0" 00:05:48.079 }, 00:05:48.079 "bdev": { 00:05:48.079 "mask": "0x8", 00:05:48.079 "tpoint_mask": 
"0xffffffffffffffff" 00:05:48.079 }, 00:05:48.079 "nvmf_rdma": { 00:05:48.079 "mask": "0x10", 00:05:48.079 "tpoint_mask": "0x0" 00:05:48.079 }, 00:05:48.079 "nvmf_tcp": { 00:05:48.079 "mask": "0x20", 00:05:48.079 "tpoint_mask": "0x0" 00:05:48.079 }, 00:05:48.079 "ftl": { 00:05:48.079 "mask": "0x40", 00:05:48.079 "tpoint_mask": "0x0" 00:05:48.079 }, 00:05:48.079 "blobfs": { 00:05:48.079 "mask": "0x80", 00:05:48.079 "tpoint_mask": "0x0" 00:05:48.079 }, 00:05:48.079 "dsa": { 00:05:48.079 "mask": "0x200", 00:05:48.079 "tpoint_mask": "0x0" 00:05:48.079 }, 00:05:48.079 "thread": { 00:05:48.079 "mask": "0x400", 00:05:48.079 "tpoint_mask": "0x0" 00:05:48.079 }, 00:05:48.079 "nvme_pcie": { 00:05:48.079 "mask": "0x800", 00:05:48.079 "tpoint_mask": "0x0" 00:05:48.079 }, 00:05:48.079 "iaa": { 00:05:48.079 "mask": "0x1000", 00:05:48.079 "tpoint_mask": "0x0" 00:05:48.079 }, 00:05:48.079 "nvme_tcp": { 00:05:48.079 "mask": "0x2000", 00:05:48.079 "tpoint_mask": "0x0" 00:05:48.079 }, 00:05:48.079 "bdev_nvme": { 00:05:48.079 "mask": "0x4000", 00:05:48.079 "tpoint_mask": "0x0" 00:05:48.079 }, 00:05:48.079 "sock": { 00:05:48.079 "mask": "0x8000", 00:05:48.079 "tpoint_mask": "0x0" 00:05:48.079 } 00:05:48.079 }' 00:05:48.079 19:57:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:48.079 19:57:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:48.079 19:57:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:48.079 19:57:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:48.079 19:57:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:48.340 19:57:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:48.340 19:57:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:48.340 19:57:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:48.340 19:57:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:48.340 19:57:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:48.340 00:05:48.340 real 0m0.233s 00:05:48.340 user 0m0.191s 00:05:48.340 sys 0m0.034s 00:05:48.341 19:57:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:48.341 19:57:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:48.341 ************************************ 00:05:48.341 END TEST rpc_trace_cmd_test 00:05:48.341 ************************************ 00:05:48.341 19:57:40 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:48.341 19:57:40 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:48.341 19:57:40 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:48.341 19:57:40 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:48.341 19:57:40 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.341 19:57:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.341 ************************************ 00:05:48.341 START TEST rpc_daemon_integrity 00:05:48.341 ************************************ 00:05:48.341 19:57:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:48.341 19:57:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:48.341 19:57:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.341 19:57:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.341 19:57:40 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.341 19:57:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:48.341 19:57:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:48.341 19:57:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:48.341 19:57:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:48.341 19:57:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.341 19:57:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.341 19:57:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.341 19:57:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:48.341 19:57:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:48.341 19:57:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.341 19:57:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.602 19:57:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.602 19:57:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:48.602 { 00:05:48.602 "name": "Malloc2", 00:05:48.602 "aliases": [ 00:05:48.602 "0084511b-8afe-4bdd-9d32-d1458b034fdb" 00:05:48.602 ], 00:05:48.602 "product_name": "Malloc disk", 00:05:48.602 "block_size": 512, 00:05:48.602 "num_blocks": 16384, 00:05:48.602 "uuid": "0084511b-8afe-4bdd-9d32-d1458b034fdb", 00:05:48.602 "assigned_rate_limits": { 00:05:48.602 "rw_ios_per_sec": 0, 00:05:48.602 "rw_mbytes_per_sec": 0, 00:05:48.602 "r_mbytes_per_sec": 0, 00:05:48.602 "w_mbytes_per_sec": 0 00:05:48.602 }, 00:05:48.602 "claimed": false, 00:05:48.602 "zoned": false, 00:05:48.602 "supported_io_types": { 00:05:48.602 "read": true, 00:05:48.602 "write": true, 00:05:48.602 "unmap": true, 00:05:48.602 "write_zeroes": true, 00:05:48.602 "flush": true, 00:05:48.602 "reset": true, 00:05:48.602 "compare": false, 00:05:48.602 "compare_and_write": false, 00:05:48.602 "abort": true, 00:05:48.602 "nvme_admin": false, 00:05:48.602 "nvme_io": false 00:05:48.602 }, 00:05:48.602 "memory_domains": [ 00:05:48.602 { 00:05:48.602 "dma_device_id": "system", 00:05:48.602 "dma_device_type": 1 00:05:48.602 }, 00:05:48.602 { 00:05:48.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.602 "dma_device_type": 2 00:05:48.602 } 00:05:48.602 ], 00:05:48.602 "driver_specific": {} 00:05:48.602 } 00:05:48.602 ]' 00:05:48.602 19:57:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:48.602 19:57:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:48.602 19:57:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:48.602 19:57:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.602 19:57:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.602 [2024-05-15 19:57:40.903439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:48.603 [2024-05-15 19:57:40.903491] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:48.603 [2024-05-15 19:57:40.903507] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa17340 00:05:48.603 [2024-05-15 19:57:40.903515] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:48.603 [2024-05-15 19:57:40.904945] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:48.603 [2024-05-15 19:57:40.904981] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:48.603 Passthru0 00:05:48.603 19:57:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.603 19:57:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:48.603 19:57:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.603 19:57:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.603 19:57:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.603 19:57:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:48.603 { 00:05:48.603 "name": "Malloc2", 00:05:48.603 "aliases": [ 00:05:48.603 "0084511b-8afe-4bdd-9d32-d1458b034fdb" 00:05:48.603 ], 00:05:48.603 "product_name": "Malloc disk", 00:05:48.603 "block_size": 512, 00:05:48.603 "num_blocks": 16384, 00:05:48.603 "uuid": "0084511b-8afe-4bdd-9d32-d1458b034fdb", 00:05:48.603 "assigned_rate_limits": { 00:05:48.603 "rw_ios_per_sec": 0, 00:05:48.603 "rw_mbytes_per_sec": 0, 00:05:48.603 "r_mbytes_per_sec": 0, 00:05:48.603 "w_mbytes_per_sec": 0 00:05:48.603 }, 00:05:48.603 "claimed": true, 00:05:48.603 "claim_type": "exclusive_write", 00:05:48.603 "zoned": false, 00:05:48.603 "supported_io_types": { 00:05:48.603 "read": true, 00:05:48.603 "write": true, 00:05:48.603 "unmap": true, 00:05:48.603 "write_zeroes": true, 00:05:48.603 "flush": true, 00:05:48.603 "reset": true, 00:05:48.603 "compare": false, 00:05:48.603 "compare_and_write": false, 00:05:48.603 "abort": true, 00:05:48.603 "nvme_admin": false, 00:05:48.603 "nvme_io": false 00:05:48.603 }, 00:05:48.603 "memory_domains": [ 00:05:48.603 { 00:05:48.603 "dma_device_id": "system", 00:05:48.603 "dma_device_type": 1 00:05:48.603 }, 00:05:48.603 { 00:05:48.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.603 "dma_device_type": 2 00:05:48.603 } 00:05:48.603 ], 00:05:48.603 "driver_specific": {} 00:05:48.603 }, 00:05:48.603 { 00:05:48.603 "name": "Passthru0", 00:05:48.603 "aliases": [ 00:05:48.603 "f04c413e-0ded-534b-ad88-d6a7eef830ea" 00:05:48.603 ], 00:05:48.603 "product_name": "passthru", 00:05:48.603 "block_size": 512, 00:05:48.603 "num_blocks": 16384, 00:05:48.603 "uuid": "f04c413e-0ded-534b-ad88-d6a7eef830ea", 00:05:48.603 "assigned_rate_limits": { 00:05:48.603 "rw_ios_per_sec": 0, 00:05:48.603 "rw_mbytes_per_sec": 0, 00:05:48.603 "r_mbytes_per_sec": 0, 00:05:48.603 "w_mbytes_per_sec": 0 00:05:48.603 }, 00:05:48.603 "claimed": false, 00:05:48.603 "zoned": false, 00:05:48.603 "supported_io_types": { 00:05:48.603 "read": true, 00:05:48.603 "write": true, 00:05:48.603 "unmap": true, 00:05:48.603 "write_zeroes": true, 00:05:48.603 "flush": true, 00:05:48.603 "reset": true, 00:05:48.603 "compare": false, 00:05:48.603 "compare_and_write": false, 00:05:48.603 "abort": true, 00:05:48.603 "nvme_admin": false, 00:05:48.603 "nvme_io": false 00:05:48.603 }, 00:05:48.603 "memory_domains": [ 00:05:48.603 { 00:05:48.603 "dma_device_id": "system", 00:05:48.603 "dma_device_type": 1 00:05:48.603 }, 00:05:48.603 { 00:05:48.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.603 "dma_device_type": 2 00:05:48.603 } 00:05:48.603 ], 00:05:48.603 "driver_specific": { 00:05:48.603 "passthru": { 00:05:48.603 "name": "Passthru0", 00:05:48.603 "base_bdev_name": "Malloc2" 00:05:48.603 } 00:05:48.603 } 00:05:48.603 } 00:05:48.603 ]' 00:05:48.603 19:57:40 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:48.603 19:57:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:48.603 19:57:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:48.603 19:57:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.603 19:57:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.603 19:57:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.603 19:57:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:48.603 19:57:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.603 19:57:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.603 19:57:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.603 19:57:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:48.603 19:57:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.603 19:57:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.603 19:57:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.603 19:57:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:48.603 19:57:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:48.603 19:57:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:48.603 00:05:48.603 real 0m0.299s 00:05:48.603 user 0m0.194s 00:05:48.603 sys 0m0.042s 00:05:48.603 19:57:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:48.603 19:57:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:48.603 ************************************ 00:05:48.603 END TEST rpc_daemon_integrity 00:05:48.603 ************************************ 00:05:48.603 19:57:41 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:48.603 19:57:41 rpc -- rpc/rpc.sh@84 -- # killprocess 3995176 00:05:48.603 19:57:41 rpc -- common/autotest_common.sh@946 -- # '[' -z 3995176 ']' 00:05:48.603 19:57:41 rpc -- common/autotest_common.sh@950 -- # kill -0 3995176 00:05:48.603 19:57:41 rpc -- common/autotest_common.sh@951 -- # uname 00:05:48.864 19:57:41 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:48.864 19:57:41 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3995176 00:05:48.864 19:57:41 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:48.864 19:57:41 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:48.864 19:57:41 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3995176' 00:05:48.864 killing process with pid 3995176 00:05:48.864 19:57:41 rpc -- common/autotest_common.sh@965 -- # kill 3995176 00:05:48.864 19:57:41 rpc -- common/autotest_common.sh@970 -- # wait 3995176 00:05:49.125 00:05:49.125 real 0m2.667s 00:05:49.125 user 0m3.464s 00:05:49.125 sys 0m0.819s 00:05:49.125 19:57:41 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:49.125 19:57:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.125 ************************************ 00:05:49.125 END TEST rpc 00:05:49.125 ************************************ 00:05:49.125 19:57:41 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:49.125 19:57:41 
-- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:49.125 19:57:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:49.125 19:57:41 -- common/autotest_common.sh@10 -- # set +x 00:05:49.125 ************************************ 00:05:49.125 START TEST skip_rpc 00:05:49.125 ************************************ 00:05:49.125 19:57:41 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:49.125 * Looking for test storage... 00:05:49.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:49.125 19:57:41 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:49.125 19:57:41 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:49.125 19:57:41 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:49.125 19:57:41 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:49.125 19:57:41 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:49.125 19:57:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.125 ************************************ 00:05:49.125 START TEST skip_rpc 00:05:49.125 ************************************ 00:05:49.125 19:57:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:49.125 19:57:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3995886 00:05:49.125 19:57:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.125 19:57:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:49.125 19:57:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:49.387 [2024-05-15 19:57:41.674353] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
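The spdk_tgt instance started above runs with --no-rpc-server, so the skip_rpc case only has to confirm that an RPC call cannot succeed. A minimal manual equivalent, assuming a local SPDK checkout laid out like the workspace paths printed in this log:

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &   # target runs, but no RPC listener is created on /var/tmp/spdk.sock
  ./scripts/rpc.py spdk_get_version               # expected to fail, mirroring the NOT rpc_cmd spdk_get_version check below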
00:05:49.387 [2024-05-15 19:57:41.674411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3995886 ] 00:05:49.387 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.387 [2024-05-15 19:57:41.750477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.387 [2024-05-15 19:57:41.844628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.699 19:57:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:54.699 19:57:46 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:54.699 19:57:46 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:54.699 19:57:46 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:54.700 19:57:46 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:54.700 19:57:46 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:54.700 19:57:46 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:54.700 19:57:46 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:54.700 19:57:46 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.700 19:57:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.700 19:57:46 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:54.700 19:57:46 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:54.700 19:57:46 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:54.700 19:57:46 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:54.700 19:57:46 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:54.700 19:57:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:54.700 19:57:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3995886 00:05:54.700 19:57:46 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 3995886 ']' 00:05:54.700 19:57:46 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 3995886 00:05:54.700 19:57:46 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:05:54.700 19:57:46 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:54.700 19:57:46 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3995886 00:05:54.700 19:57:46 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:54.700 19:57:46 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:54.700 19:57:46 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3995886' 00:05:54.700 killing process with pid 3995886 00:05:54.700 19:57:46 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 3995886 00:05:54.700 19:57:46 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 3995886 00:05:54.700 00:05:54.700 real 0m5.276s 00:05:54.700 user 0m5.037s 00:05:54.700 sys 0m0.275s 00:05:54.700 19:57:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:54.700 19:57:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.700 ************************************ 00:05:54.700 END TEST skip_rpc 
00:05:54.700 ************************************ 00:05:54.700 19:57:46 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:54.700 19:57:46 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:54.700 19:57:46 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:54.700 19:57:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.700 ************************************ 00:05:54.700 START TEST skip_rpc_with_json 00:05:54.700 ************************************ 00:05:54.700 19:57:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:54.700 19:57:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:54.700 19:57:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3996926 00:05:54.700 19:57:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.700 19:57:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3996926 00:05:54.700 19:57:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 3996926 ']' 00:05:54.700 19:57:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.700 19:57:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:54.700 19:57:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.700 19:57:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:54.700 19:57:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.700 19:57:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.700 [2024-05-15 19:57:47.028048] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
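The skip_rpc_with_json case starting above drives the target over RPC, snapshots the result with save_config, and then proves the snapshot is replayable. A sketch of the same round trip, assuming the repo-root-relative paths this test uses:

  ./scripts/rpc.py nvmf_create_transport -t tcp             # what rpc_cmd nvmf_create_transport -t tcp does below
  ./scripts/rpc.py save_config > test/rpc/config.json       # dump the live configuration as JSON (the blob printed below)
  ./build/bin/spdk_tgt -m 0x1 --json test/rpc/config.json   # a fresh target replays it; the test then greps its log for 'TCP Transport Init'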
00:05:54.700 [2024-05-15 19:57:47.028098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3996926 ] 00:05:54.700 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.700 [2024-05-15 19:57:47.113617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.700 [2024-05-15 19:57:47.181062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.642 19:57:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:55.642 19:57:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:05:55.642 19:57:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:55.642 19:57:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.642 19:57:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:55.642 [2024-05-15 19:57:47.889359] nvmf_rpc.c:2547:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:55.642 request: 00:05:55.642 { 00:05:55.642 "trtype": "tcp", 00:05:55.642 "method": "nvmf_get_transports", 00:05:55.642 "req_id": 1 00:05:55.642 } 00:05:55.642 Got JSON-RPC error response 00:05:55.642 response: 00:05:55.642 { 00:05:55.642 "code": -19, 00:05:55.642 "message": "No such device" 00:05:55.642 } 00:05:55.642 19:57:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:55.642 19:57:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:55.642 19:57:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.642 19:57:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:55.642 [2024-05-15 19:57:47.897463] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:55.642 19:57:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.642 19:57:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:55.642 19:57:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.642 19:57:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:55.642 19:57:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.642 19:57:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:55.642 { 00:05:55.642 "subsystems": [ 00:05:55.642 { 00:05:55.642 "subsystem": "keyring", 00:05:55.642 "config": [] 00:05:55.642 }, 00:05:55.642 { 00:05:55.642 "subsystem": "iobuf", 00:05:55.642 "config": [ 00:05:55.642 { 00:05:55.642 "method": "iobuf_set_options", 00:05:55.642 "params": { 00:05:55.642 "small_pool_count": 8192, 00:05:55.642 "large_pool_count": 1024, 00:05:55.642 "small_bufsize": 8192, 00:05:55.642 "large_bufsize": 135168 00:05:55.642 } 00:05:55.642 } 00:05:55.642 ] 00:05:55.642 }, 00:05:55.642 { 00:05:55.642 "subsystem": "sock", 00:05:55.642 "config": [ 00:05:55.642 { 00:05:55.642 "method": "sock_impl_set_options", 00:05:55.642 "params": { 00:05:55.642 "impl_name": "posix", 00:05:55.642 "recv_buf_size": 2097152, 00:05:55.642 "send_buf_size": 2097152, 00:05:55.642 "enable_recv_pipe": true, 00:05:55.642 "enable_quickack": false, 00:05:55.643 
"enable_placement_id": 0, 00:05:55.643 "enable_zerocopy_send_server": true, 00:05:55.643 "enable_zerocopy_send_client": false, 00:05:55.643 "zerocopy_threshold": 0, 00:05:55.643 "tls_version": 0, 00:05:55.643 "enable_ktls": false 00:05:55.643 } 00:05:55.643 }, 00:05:55.643 { 00:05:55.643 "method": "sock_impl_set_options", 00:05:55.643 "params": { 00:05:55.643 "impl_name": "ssl", 00:05:55.643 "recv_buf_size": 4096, 00:05:55.643 "send_buf_size": 4096, 00:05:55.643 "enable_recv_pipe": true, 00:05:55.643 "enable_quickack": false, 00:05:55.643 "enable_placement_id": 0, 00:05:55.643 "enable_zerocopy_send_server": true, 00:05:55.643 "enable_zerocopy_send_client": false, 00:05:55.643 "zerocopy_threshold": 0, 00:05:55.643 "tls_version": 0, 00:05:55.643 "enable_ktls": false 00:05:55.643 } 00:05:55.643 } 00:05:55.643 ] 00:05:55.643 }, 00:05:55.643 { 00:05:55.643 "subsystem": "vmd", 00:05:55.643 "config": [] 00:05:55.643 }, 00:05:55.643 { 00:05:55.643 "subsystem": "accel", 00:05:55.643 "config": [ 00:05:55.643 { 00:05:55.643 "method": "accel_set_options", 00:05:55.643 "params": { 00:05:55.643 "small_cache_size": 128, 00:05:55.643 "large_cache_size": 16, 00:05:55.643 "task_count": 2048, 00:05:55.643 "sequence_count": 2048, 00:05:55.643 "buf_count": 2048 00:05:55.643 } 00:05:55.643 } 00:05:55.643 ] 00:05:55.643 }, 00:05:55.643 { 00:05:55.643 "subsystem": "bdev", 00:05:55.643 "config": [ 00:05:55.643 { 00:05:55.643 "method": "bdev_set_options", 00:05:55.643 "params": { 00:05:55.643 "bdev_io_pool_size": 65535, 00:05:55.643 "bdev_io_cache_size": 256, 00:05:55.643 "bdev_auto_examine": true, 00:05:55.643 "iobuf_small_cache_size": 128, 00:05:55.643 "iobuf_large_cache_size": 16 00:05:55.643 } 00:05:55.643 }, 00:05:55.643 { 00:05:55.643 "method": "bdev_raid_set_options", 00:05:55.643 "params": { 00:05:55.643 "process_window_size_kb": 1024 00:05:55.643 } 00:05:55.643 }, 00:05:55.643 { 00:05:55.643 "method": "bdev_iscsi_set_options", 00:05:55.643 "params": { 00:05:55.643 "timeout_sec": 30 00:05:55.643 } 00:05:55.643 }, 00:05:55.643 { 00:05:55.643 "method": "bdev_nvme_set_options", 00:05:55.643 "params": { 00:05:55.643 "action_on_timeout": "none", 00:05:55.643 "timeout_us": 0, 00:05:55.643 "timeout_admin_us": 0, 00:05:55.643 "keep_alive_timeout_ms": 10000, 00:05:55.643 "arbitration_burst": 0, 00:05:55.643 "low_priority_weight": 0, 00:05:55.643 "medium_priority_weight": 0, 00:05:55.643 "high_priority_weight": 0, 00:05:55.643 "nvme_adminq_poll_period_us": 10000, 00:05:55.643 "nvme_ioq_poll_period_us": 0, 00:05:55.643 "io_queue_requests": 0, 00:05:55.643 "delay_cmd_submit": true, 00:05:55.643 "transport_retry_count": 4, 00:05:55.643 "bdev_retry_count": 3, 00:05:55.643 "transport_ack_timeout": 0, 00:05:55.643 "ctrlr_loss_timeout_sec": 0, 00:05:55.643 "reconnect_delay_sec": 0, 00:05:55.643 "fast_io_fail_timeout_sec": 0, 00:05:55.643 "disable_auto_failback": false, 00:05:55.643 "generate_uuids": false, 00:05:55.643 "transport_tos": 0, 00:05:55.643 "nvme_error_stat": false, 00:05:55.643 "rdma_srq_size": 0, 00:05:55.643 "io_path_stat": false, 00:05:55.643 "allow_accel_sequence": false, 00:05:55.643 "rdma_max_cq_size": 0, 00:05:55.643 "rdma_cm_event_timeout_ms": 0, 00:05:55.643 "dhchap_digests": [ 00:05:55.643 "sha256", 00:05:55.643 "sha384", 00:05:55.643 "sha512" 00:05:55.643 ], 00:05:55.643 "dhchap_dhgroups": [ 00:05:55.643 "null", 00:05:55.643 "ffdhe2048", 00:05:55.643 "ffdhe3072", 00:05:55.643 "ffdhe4096", 00:05:55.643 "ffdhe6144", 00:05:55.643 "ffdhe8192" 00:05:55.643 ] 00:05:55.643 } 00:05:55.643 }, 00:05:55.643 { 
00:05:55.643 "method": "bdev_nvme_set_hotplug", 00:05:55.643 "params": { 00:05:55.643 "period_us": 100000, 00:05:55.643 "enable": false 00:05:55.643 } 00:05:55.643 }, 00:05:55.643 { 00:05:55.643 "method": "bdev_wait_for_examine" 00:05:55.643 } 00:05:55.643 ] 00:05:55.643 }, 00:05:55.643 { 00:05:55.643 "subsystem": "scsi", 00:05:55.643 "config": null 00:05:55.643 }, 00:05:55.643 { 00:05:55.643 "subsystem": "scheduler", 00:05:55.643 "config": [ 00:05:55.643 { 00:05:55.643 "method": "framework_set_scheduler", 00:05:55.643 "params": { 00:05:55.643 "name": "static" 00:05:55.643 } 00:05:55.643 } 00:05:55.643 ] 00:05:55.643 }, 00:05:55.643 { 00:05:55.643 "subsystem": "vhost_scsi", 00:05:55.643 "config": [] 00:05:55.643 }, 00:05:55.643 { 00:05:55.643 "subsystem": "vhost_blk", 00:05:55.643 "config": [] 00:05:55.643 }, 00:05:55.643 { 00:05:55.643 "subsystem": "ublk", 00:05:55.643 "config": [] 00:05:55.643 }, 00:05:55.643 { 00:05:55.643 "subsystem": "nbd", 00:05:55.643 "config": [] 00:05:55.643 }, 00:05:55.643 { 00:05:55.643 "subsystem": "nvmf", 00:05:55.643 "config": [ 00:05:55.643 { 00:05:55.643 "method": "nvmf_set_config", 00:05:55.643 "params": { 00:05:55.643 "discovery_filter": "match_any", 00:05:55.643 "admin_cmd_passthru": { 00:05:55.643 "identify_ctrlr": false 00:05:55.643 } 00:05:55.643 } 00:05:55.643 }, 00:05:55.643 { 00:05:55.643 "method": "nvmf_set_max_subsystems", 00:05:55.643 "params": { 00:05:55.643 "max_subsystems": 1024 00:05:55.643 } 00:05:55.643 }, 00:05:55.643 { 00:05:55.643 "method": "nvmf_set_crdt", 00:05:55.643 "params": { 00:05:55.643 "crdt1": 0, 00:05:55.643 "crdt2": 0, 00:05:55.643 "crdt3": 0 00:05:55.643 } 00:05:55.643 }, 00:05:55.643 { 00:05:55.643 "method": "nvmf_create_transport", 00:05:55.643 "params": { 00:05:55.643 "trtype": "TCP", 00:05:55.643 "max_queue_depth": 128, 00:05:55.643 "max_io_qpairs_per_ctrlr": 127, 00:05:55.643 "in_capsule_data_size": 4096, 00:05:55.643 "max_io_size": 131072, 00:05:55.643 "io_unit_size": 131072, 00:05:55.643 "max_aq_depth": 128, 00:05:55.643 "num_shared_buffers": 511, 00:05:55.643 "buf_cache_size": 4294967295, 00:05:55.643 "dif_insert_or_strip": false, 00:05:55.643 "zcopy": false, 00:05:55.643 "c2h_success": true, 00:05:55.643 "sock_priority": 0, 00:05:55.643 "abort_timeout_sec": 1, 00:05:55.643 "ack_timeout": 0, 00:05:55.643 "data_wr_pool_size": 0 00:05:55.643 } 00:05:55.643 } 00:05:55.643 ] 00:05:55.643 }, 00:05:55.643 { 00:05:55.643 "subsystem": "iscsi", 00:05:55.643 "config": [ 00:05:55.643 { 00:05:55.643 "method": "iscsi_set_options", 00:05:55.643 "params": { 00:05:55.643 "node_base": "iqn.2016-06.io.spdk", 00:05:55.643 "max_sessions": 128, 00:05:55.643 "max_connections_per_session": 2, 00:05:55.643 "max_queue_depth": 64, 00:05:55.643 "default_time2wait": 2, 00:05:55.643 "default_time2retain": 20, 00:05:55.643 "first_burst_length": 8192, 00:05:55.643 "immediate_data": true, 00:05:55.643 "allow_duplicated_isid": false, 00:05:55.643 "error_recovery_level": 0, 00:05:55.643 "nop_timeout": 60, 00:05:55.643 "nop_in_interval": 30, 00:05:55.643 "disable_chap": false, 00:05:55.643 "require_chap": false, 00:05:55.643 "mutual_chap": false, 00:05:55.643 "chap_group": 0, 00:05:55.643 "max_large_datain_per_connection": 64, 00:05:55.643 "max_r2t_per_connection": 4, 00:05:55.643 "pdu_pool_size": 36864, 00:05:55.643 "immediate_data_pool_size": 16384, 00:05:55.643 "data_out_pool_size": 2048 00:05:55.643 } 00:05:55.643 } 00:05:55.643 ] 00:05:55.643 } 00:05:55.643 ] 00:05:55.643 } 00:05:55.643 19:57:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 
-- # trap - SIGINT SIGTERM EXIT 00:05:55.643 19:57:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3996926 00:05:55.643 19:57:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3996926 ']' 00:05:55.643 19:57:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3996926 00:05:55.643 19:57:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:55.643 19:57:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:55.643 19:57:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3996926 00:05:55.643 19:57:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:55.643 19:57:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:55.643 19:57:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3996926' 00:05:55.643 killing process with pid 3996926 00:05:55.643 19:57:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3996926 00:05:55.643 19:57:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3996926 00:05:55.904 19:57:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3997266 00:05:55.904 19:57:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:55.904 19:57:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:01.190 19:57:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3997266 00:06:01.190 19:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3997266 ']' 00:06:01.190 19:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3997266 00:06:01.190 19:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:06:01.190 19:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:01.190 19:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3997266 00:06:01.190 19:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:01.190 19:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:01.190 19:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3997266' 00:06:01.190 killing process with pid 3997266 00:06:01.190 19:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3997266 00:06:01.190 19:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3997266 00:06:01.190 19:57:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:01.190 19:57:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:01.190 00:06:01.190 real 0m6.592s 00:06:01.190 user 0m6.507s 00:06:01.190 sys 0m0.557s 00:06:01.190 19:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:01.190 19:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:01.190 
************************************ 00:06:01.190 END TEST skip_rpc_with_json 00:06:01.190 ************************************ 00:06:01.190 19:57:53 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:01.190 19:57:53 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:01.190 19:57:53 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:01.190 19:57:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.190 ************************************ 00:06:01.190 START TEST skip_rpc_with_delay 00:06:01.190 ************************************ 00:06:01.190 19:57:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:06:01.190 19:57:53 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:01.190 19:57:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:01.190 19:57:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:01.190 19:57:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:01.190 19:57:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.190 19:57:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:01.190 19:57:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.190 19:57:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:01.190 19:57:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.190 19:57:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:01.190 19:57:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:01.190 19:57:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:01.451 [2024-05-15 19:57:53.707062] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
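The *ERROR* line above is the expected result of the skip_rpc_with_delay case: --wait-for-rpc only makes sense when an RPC server will be started, so combining it with --no-rpc-server has to fail fast. A reproduction using only flags that appear in this log:

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  # expected: "Cannot use '--wait-for-rpc' if no RPC server is going to be started." and a non-zero exit,
  # which is what the NOT wrapper around spdk_tgt asserts here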
00:06:01.451 [2024-05-15 19:57:53.707178] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:01.451 19:57:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:01.451 19:57:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:01.451 19:57:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:01.451 19:57:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:01.451 00:06:01.451 real 0m0.076s 00:06:01.451 user 0m0.055s 00:06:01.451 sys 0m0.021s 00:06:01.451 19:57:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:01.451 19:57:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:01.451 ************************************ 00:06:01.451 END TEST skip_rpc_with_delay 00:06:01.451 ************************************ 00:06:01.451 19:57:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:01.451 19:57:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:01.451 19:57:53 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:01.451 19:57:53 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:01.451 19:57:53 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:01.451 19:57:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.451 ************************************ 00:06:01.451 START TEST exit_on_failed_rpc_init 00:06:01.451 ************************************ 00:06:01.451 19:57:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:06:01.451 19:57:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3998332 00:06:01.451 19:57:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3998332 00:06:01.451 19:57:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 3998332 ']' 00:06:01.451 19:57:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.451 19:57:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:01.451 19:57:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.451 19:57:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:01.451 19:57:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:01.451 19:57:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.451 [2024-05-15 19:57:53.850704] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:06:01.451 [2024-05-15 19:57:53.850767] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3998332 ] 00:06:01.451 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.451 [2024-05-15 19:57:53.937900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.741 [2024-05-15 19:57:54.011344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.339 19:57:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:02.339 19:57:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:06:02.339 19:57:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:02.339 19:57:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:02.339 19:57:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:02.339 19:57:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:02.339 19:57:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:02.339 19:57:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.339 19:57:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:02.339 19:57:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.339 19:57:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:02.339 19:57:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.339 19:57:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:02.339 19:57:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:02.339 19:57:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:02.339 [2024-05-15 19:57:54.764449] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:06:02.339 [2024-05-15 19:57:54.764502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3998667 ] 00:06:02.339 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.340 [2024-05-15 19:57:54.828266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.599 [2024-05-15 19:57:54.892264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.599 [2024-05-15 19:57:54.892334] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
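Both spdk_tgt instances in exit_on_failed_rpc_init default to the same RPC socket, so the second launch fails with the "socket path /var/tmp/spdk.sock in use" error above, which is exactly what the test expects. A sketch of the collision; the -r (RPC socket path) option on the last line is the usual way to give a second target its own socket and is an assumption here, not something this log exercises:

  ./build/bin/spdk_tgt -m 0x1 &                        # first target owns /var/tmp/spdk.sock
  ./build/bin/spdk_tgt -m 0x2                          # fails: RPC Unix domain socket path in use, app stops non-zero
  ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock   # hypothetical: a second instance with its own socket would start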
00:06:02.599 [2024-05-15 19:57:54.892343] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:02.599 [2024-05-15 19:57:54.892350] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:02.599 19:57:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:02.599 19:57:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:02.599 19:57:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:02.599 19:57:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:02.599 19:57:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:02.599 19:57:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:02.599 19:57:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:02.599 19:57:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3998332 00:06:02.600 19:57:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 3998332 ']' 00:06:02.600 19:57:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 3998332 00:06:02.600 19:57:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:06:02.600 19:57:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:02.600 19:57:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3998332 00:06:02.600 19:57:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:02.600 19:57:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:02.600 19:57:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3998332' 00:06:02.600 killing process with pid 3998332 00:06:02.600 19:57:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 3998332 00:06:02.600 19:57:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 3998332 00:06:02.860 00:06:02.860 real 0m1.412s 00:06:02.860 user 0m1.696s 00:06:02.860 sys 0m0.386s 00:06:02.860 19:57:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:02.860 19:57:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:02.860 ************************************ 00:06:02.860 END TEST exit_on_failed_rpc_init 00:06:02.860 ************************************ 00:06:02.860 19:57:55 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:02.860 00:06:02.860 real 0m13.764s 00:06:02.860 user 0m13.437s 00:06:02.860 sys 0m1.516s 00:06:02.860 19:57:55 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:02.860 19:57:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.860 ************************************ 00:06:02.860 END TEST skip_rpc 00:06:02.860 ************************************ 00:06:02.860 19:57:55 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:02.860 19:57:55 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:02.860 19:57:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:02.860 19:57:55 -- 
common/autotest_common.sh@10 -- # set +x 00:06:02.860 ************************************ 00:06:02.860 START TEST rpc_client 00:06:02.860 ************************************ 00:06:02.860 19:57:55 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:03.121 * Looking for test storage... 00:06:03.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:03.121 19:57:55 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:03.121 OK 00:06:03.121 19:57:55 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:03.121 00:06:03.121 real 0m0.108s 00:06:03.121 user 0m0.040s 00:06:03.121 sys 0m0.077s 00:06:03.121 19:57:55 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:03.121 19:57:55 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:03.121 ************************************ 00:06:03.121 END TEST rpc_client 00:06:03.121 ************************************ 00:06:03.121 19:57:55 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:03.121 19:57:55 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:03.121 19:57:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:03.121 19:57:55 -- common/autotest_common.sh@10 -- # set +x 00:06:03.121 ************************************ 00:06:03.121 START TEST json_config 00:06:03.121 ************************************ 00:06:03.121 19:57:55 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:03.121 19:57:55 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:03.121 19:57:55 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:03.121 19:57:55 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:03.121 19:57:55 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:03.121 19:57:55 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:03.121 19:57:55 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:03.121 19:57:55 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:03.121 19:57:55 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:03.121 19:57:55 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:03.121 19:57:55 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:03.121 19:57:55 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:03.121 19:57:55 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:03.121 19:57:55 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:03.121 19:57:55 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:03.121 19:57:55 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:03.121 19:57:55 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:03.121 19:57:55 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:03.121 19:57:55 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:03.121 19:57:55 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:03.121 19:57:55 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:03.121 19:57:55 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:03.121 19:57:55 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:03.121 19:57:55 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.121 19:57:55 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.121 19:57:55 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.121 19:57:55 json_config -- paths/export.sh@5 -- # export PATH 00:06:03.121 19:57:55 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.121 19:57:55 json_config -- nvmf/common.sh@47 -- # : 0 00:06:03.121 19:57:55 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:03.121 19:57:55 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:03.121 19:57:55 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:03.122 19:57:55 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:03.122 19:57:55 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:03.122 19:57:55 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:03.122 19:57:55 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:03.122 19:57:55 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:03.122 19:57:55 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:03.122 19:57:55 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:03.122 19:57:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:03.122 19:57:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:03.122 19:57:55 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:03.122 19:57:55 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:03.122 19:57:55 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:03.122 19:57:55 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:03.122 19:57:55 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:03.122 19:57:55 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:03.122 19:57:55 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:03.122 19:57:55 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:03.122 19:57:55 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:03.122 19:57:55 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:03.384 19:57:55 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:03.384 19:57:55 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:06:03.384 INFO: JSON configuration test init 00:06:03.384 19:57:55 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:06:03.384 19:57:55 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:06:03.384 19:57:55 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:03.384 19:57:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.384 19:57:55 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:06:03.384 19:57:55 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:03.384 19:57:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.384 19:57:55 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:06:03.384 19:57:55 json_config -- json_config/common.sh@9 -- # local app=target 00:06:03.384 19:57:55 json_config -- json_config/common.sh@10 -- # shift 00:06:03.384 19:57:55 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:03.384 19:57:55 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:03.384 19:57:55 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:03.384 19:57:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:03.384 19:57:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:03.384 19:57:55 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3998841 00:06:03.384 19:57:55 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:03.384 Waiting for target to run... 
00:06:03.384 19:57:55 json_config -- json_config/common.sh@25 -- # waitforlisten 3998841 /var/tmp/spdk_tgt.sock 00:06:03.384 19:57:55 json_config -- common/autotest_common.sh@827 -- # '[' -z 3998841 ']' 00:06:03.384 19:57:55 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:03.384 19:57:55 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:03.384 19:57:55 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:03.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:03.384 19:57:55 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:03.384 19:57:55 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:03.384 19:57:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.384 [2024-05-15 19:57:55.691667] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:06:03.384 [2024-05-15 19:57:55.691738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3998841 ] 00:06:03.384 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.645 [2024-05-15 19:57:56.135719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.906 [2024-05-15 19:57:56.194799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.167 19:57:56 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:04.167 19:57:56 json_config -- common/autotest_common.sh@860 -- # return 0 00:06:04.167 19:57:56 json_config -- json_config/common.sh@26 -- # echo '' 00:06:04.167 00:06:04.167 19:57:56 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:06:04.167 19:57:56 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:06:04.167 19:57:56 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:04.167 19:57:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.167 19:57:56 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:06:04.167 19:57:56 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:06:04.167 19:57:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:04.167 19:57:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.167 19:57:56 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:04.167 19:57:56 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:06:04.167 19:57:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:04.738 19:57:57 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:04.738 19:57:57 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:04.738 19:57:57 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:04.738 19:57:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.738 19:57:57 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:06:04.738 19:57:57 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:04.738 19:57:57 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:04.738 19:57:57 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:04.738 19:57:57 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:04.738 19:57:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:04.999 19:57:57 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:04.999 19:57:57 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:04.999 19:57:57 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:04.999 19:57:57 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:04.999 19:57:57 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:04.999 19:57:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.999 19:57:57 json_config -- json_config/json_config.sh@55 -- # return 0 00:06:04.999 19:57:57 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:04.999 19:57:57 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:04.999 19:57:57 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:04.999 19:57:57 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:04.999 19:57:57 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:04.999 19:57:57 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:04.999 19:57:57 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:04.999 19:57:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.999 19:57:57 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:04.999 19:57:57 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:06:04.999 19:57:57 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:06:04.999 19:57:57 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:04.999 19:57:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:05.260 MallocForNvmf0 00:06:05.260 19:57:57 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:05.260 19:57:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:05.520 MallocForNvmf1 00:06:05.520 19:57:57 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:05.520 19:57:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:05.520 [2024-05-15 19:57:57.967911] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:05.520 19:57:57 
json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:05.520 19:57:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:05.781 19:57:58 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:05.781 19:57:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:06.042 19:57:58 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:06.042 19:57:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:06.302 19:57:58 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:06.302 19:57:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:06.302 [2024-05-15 19:57:58.762007] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:06.302 [2024-05-15 19:57:58.762574] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:06.302 19:57:58 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:06.302 19:57:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:06.302 19:57:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.564 19:57:58 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:06.564 19:57:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:06.564 19:57:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.564 19:57:58 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:06.564 19:57:58 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:06.564 19:57:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:06.564 MallocBdevForConfigChangeCheck 00:06:06.564 19:57:59 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:06.564 19:57:59 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:06.564 19:57:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.825 19:57:59 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:06.825 19:57:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:07.085 19:57:59 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down 
applications...' 00:06:07.085 INFO: shutting down applications... 00:06:07.085 19:57:59 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:07.085 19:57:59 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:07.085 19:57:59 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:07.085 19:57:59 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:07.347 Calling clear_iscsi_subsystem 00:06:07.347 Calling clear_nvmf_subsystem 00:06:07.347 Calling clear_nbd_subsystem 00:06:07.347 Calling clear_ublk_subsystem 00:06:07.347 Calling clear_vhost_blk_subsystem 00:06:07.347 Calling clear_vhost_scsi_subsystem 00:06:07.347 Calling clear_bdev_subsystem 00:06:07.608 19:57:59 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:07.608 19:57:59 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:07.608 19:57:59 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:07.608 19:57:59 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:07.608 19:57:59 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:07.608 19:57:59 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:07.869 19:58:00 json_config -- json_config/json_config.sh@345 -- # break 00:06:07.869 19:58:00 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:07.869 19:58:00 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:07.869 19:58:00 json_config -- json_config/common.sh@31 -- # local app=target 00:06:07.869 19:58:00 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:07.869 19:58:00 json_config -- json_config/common.sh@35 -- # [[ -n 3998841 ]] 00:06:07.869 19:58:00 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3998841 00:06:07.869 [2024-05-15 19:58:00.220238] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:07.869 19:58:00 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:07.869 19:58:00 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:07.869 19:58:00 json_config -- json_config/common.sh@41 -- # kill -0 3998841 00:06:07.869 19:58:00 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:08.441 19:58:00 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:08.441 19:58:00 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:08.441 19:58:00 json_config -- json_config/common.sh@41 -- # kill -0 3998841 00:06:08.441 19:58:00 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:08.441 19:58:00 json_config -- json_config/common.sh@43 -- # break 00:06:08.441 19:58:00 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:08.441 19:58:00 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:08.441 SPDK target shutdown done 00:06:08.441 19:58:00 json_config -- 
json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:08.441 INFO: relaunching applications... 00:06:08.441 19:58:00 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:08.441 19:58:00 json_config -- json_config/common.sh@9 -- # local app=target 00:06:08.441 19:58:00 json_config -- json_config/common.sh@10 -- # shift 00:06:08.441 19:58:00 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:08.441 19:58:00 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:08.441 19:58:00 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:08.441 19:58:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.441 19:58:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.441 19:58:00 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3999981 00:06:08.441 19:58:00 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:08.441 Waiting for target to run... 00:06:08.441 19:58:00 json_config -- json_config/common.sh@25 -- # waitforlisten 3999981 /var/tmp/spdk_tgt.sock 00:06:08.441 19:58:00 json_config -- common/autotest_common.sh@827 -- # '[' -z 3999981 ']' 00:06:08.441 19:58:00 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:08.441 19:58:00 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:08.441 19:58:00 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:08.441 19:58:00 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:08.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:08.441 19:58:00 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:08.441 19:58:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.441 [2024-05-15 19:58:00.781985] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:06:08.441 [2024-05-15 19:58:00.782044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3999981 ] 00:06:08.441 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.702 [2024-05-15 19:58:01.071733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.702 [2024-05-15 19:58:01.122893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.272 [2024-05-15 19:58:01.617941] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:09.272 [2024-05-15 19:58:01.649917] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:09.272 [2024-05-15 19:58:01.650467] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:09.272 19:58:01 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:09.272 19:58:01 json_config -- common/autotest_common.sh@860 -- # return 0 00:06:09.272 19:58:01 json_config -- json_config/common.sh@26 -- # echo '' 00:06:09.272 00:06:09.272 19:58:01 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:09.272 19:58:01 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:09.272 INFO: Checking if target configuration is the same... 00:06:09.272 19:58:01 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:09.272 19:58:01 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:09.272 19:58:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:09.272 + '[' 2 -ne 2 ']' 00:06:09.272 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:09.272 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:09.272 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:09.272 +++ basename /dev/fd/62 00:06:09.272 ++ mktemp /tmp/62.XXX 00:06:09.272 + tmp_file_1=/tmp/62.K0t 00:06:09.272 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:09.272 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:09.272 + tmp_file_2=/tmp/spdk_tgt_config.json.m24 00:06:09.272 + ret=0 00:06:09.272 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:09.533 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:09.793 + diff -u /tmp/62.K0t /tmp/spdk_tgt_config.json.m24 00:06:09.793 + echo 'INFO: JSON config files are the same' 00:06:09.793 INFO: JSON config files are the same 00:06:09.793 + rm /tmp/62.K0t /tmp/spdk_tgt_config.json.m24 00:06:09.793 + exit 0 00:06:09.793 19:58:02 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:09.793 19:58:02 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:09.793 INFO: changing configuration and checking if this can be detected... 
00:06:09.793 19:58:02 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:09.793 19:58:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:09.793 19:58:02 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:09.793 19:58:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:09.793 19:58:02 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:09.793 + '[' 2 -ne 2 ']' 00:06:09.793 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:10.053 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:10.053 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:10.053 +++ basename /dev/fd/62 00:06:10.053 ++ mktemp /tmp/62.XXX 00:06:10.053 + tmp_file_1=/tmp/62.hi6 00:06:10.053 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:10.053 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:10.053 + tmp_file_2=/tmp/spdk_tgt_config.json.s3H 00:06:10.053 + ret=0 00:06:10.053 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:10.314 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:10.314 + diff -u /tmp/62.hi6 /tmp/spdk_tgt_config.json.s3H 00:06:10.314 + ret=1 00:06:10.314 + echo '=== Start of file: /tmp/62.hi6 ===' 00:06:10.314 + cat /tmp/62.hi6 00:06:10.314 + echo '=== End of file: /tmp/62.hi6 ===' 00:06:10.314 + echo '' 00:06:10.314 + echo '=== Start of file: /tmp/spdk_tgt_config.json.s3H ===' 00:06:10.314 + cat /tmp/spdk_tgt_config.json.s3H 00:06:10.314 + echo '=== End of file: /tmp/spdk_tgt_config.json.s3H ===' 00:06:10.314 + echo '' 00:06:10.314 + rm /tmp/62.hi6 /tmp/spdk_tgt_config.json.s3H 00:06:10.314 + exit 1 00:06:10.314 19:58:02 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:10.314 INFO: configuration change detected. 
00:06:10.314 19:58:02 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:10.314 19:58:02 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:10.314 19:58:02 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:10.314 19:58:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.314 19:58:02 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:10.314 19:58:02 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:10.314 19:58:02 json_config -- json_config/json_config.sh@317 -- # [[ -n 3999981 ]] 00:06:10.314 19:58:02 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:10.314 19:58:02 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:10.314 19:58:02 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:10.314 19:58:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.314 19:58:02 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:10.314 19:58:02 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:10.314 19:58:02 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:10.314 19:58:02 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:10.314 19:58:02 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:10.315 19:58:02 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:10.315 19:58:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:10.315 19:58:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.315 19:58:02 json_config -- json_config/json_config.sh@323 -- # killprocess 3999981 00:06:10.315 19:58:02 json_config -- common/autotest_common.sh@946 -- # '[' -z 3999981 ']' 00:06:10.315 19:58:02 json_config -- common/autotest_common.sh@950 -- # kill -0 3999981 00:06:10.315 19:58:02 json_config -- common/autotest_common.sh@951 -- # uname 00:06:10.315 19:58:02 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:10.315 19:58:02 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3999981 00:06:10.592 19:58:02 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:10.592 19:58:02 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:10.592 19:58:02 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3999981' 00:06:10.592 killing process with pid 3999981 00:06:10.592 19:58:02 json_config -- common/autotest_common.sh@965 -- # kill 3999981 00:06:10.592 [2024-05-15 19:58:02.819613] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:10.592 19:58:02 json_config -- common/autotest_common.sh@970 -- # wait 3999981 00:06:10.856 19:58:03 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:10.856 19:58:03 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:10.856 19:58:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:10.856 19:58:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.856 19:58:03 
json_config -- json_config/json_config.sh@328 -- # return 0 00:06:10.856 19:58:03 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:10.856 INFO: Success 00:06:10.856 00:06:10.856 real 0m7.630s 00:06:10.856 user 0m9.722s 00:06:10.856 sys 0m1.894s 00:06:10.856 19:58:03 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:10.856 19:58:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.856 ************************************ 00:06:10.856 END TEST json_config 00:06:10.856 ************************************ 00:06:10.856 19:58:03 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:10.856 19:58:03 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:10.856 19:58:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:10.856 19:58:03 -- common/autotest_common.sh@10 -- # set +x 00:06:10.856 ************************************ 00:06:10.856 START TEST json_config_extra_key 00:06:10.856 ************************************ 00:06:10.856 19:58:03 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:10.856 19:58:03 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:10.856 19:58:03 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:10.856 19:58:03 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:10.856 19:58:03 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:10.856 19:58:03 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:10.856 19:58:03 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:10.856 19:58:03 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:10.856 19:58:03 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:10.856 19:58:03 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:10.856 19:58:03 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:10.856 19:58:03 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:10.856 19:58:03 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:10.856 19:58:03 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:10.856 19:58:03 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:10.856 19:58:03 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:10.856 19:58:03 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:10.856 19:58:03 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:10.856 19:58:03 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:10.856 19:58:03 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:10.856 19:58:03 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.856 19:58:03 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.856 19:58:03 
json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.856 19:58:03 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.856 19:58:03 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.857 19:58:03 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.857 19:58:03 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:10.857 19:58:03 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.857 19:58:03 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:10.857 19:58:03 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:10.857 19:58:03 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:10.857 19:58:03 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:10.857 19:58:03 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:10.857 19:58:03 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:10.857 19:58:03 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:10.857 19:58:03 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:10.857 19:58:03 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:10.857 19:58:03 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:10.857 19:58:03 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:10.857 19:58:03 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:10.857 19:58:03 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:10.857 19:58:03 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:10.857 19:58:03 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:10.857 19:58:03 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:10.857 19:58:03 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:10.857 19:58:03 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:10.857 19:58:03 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:10.857 19:58:03 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:10.857 INFO: launching applications... 00:06:10.857 19:58:03 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:10.857 19:58:03 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:10.857 19:58:03 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:10.857 19:58:03 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:10.857 19:58:03 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:10.857 19:58:03 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:10.857 19:58:03 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.857 19:58:03 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.857 19:58:03 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=4000701 00:06:10.857 19:58:03 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:10.857 Waiting for target to run... 00:06:10.857 19:58:03 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 4000701 /var/tmp/spdk_tgt.sock 00:06:10.857 19:58:03 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 4000701 ']' 00:06:10.857 19:58:03 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:10.857 19:58:03 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:10.857 19:58:03 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:10.857 19:58:03 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:10.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:10.857 19:58:03 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:10.857 19:58:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:11.118 [2024-05-15 19:58:03.386058] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:06:11.118 [2024-05-15 19:58:03.386127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4000701 ] 00:06:11.118 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.379 [2024-05-15 19:58:03.655913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.379 [2024-05-15 19:58:03.709348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.950 19:58:04 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:11.950 19:58:04 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:06:11.950 19:58:04 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:11.950 00:06:11.950 19:58:04 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:11.950 INFO: shutting down applications... 00:06:11.950 19:58:04 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:11.950 19:58:04 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:11.950 19:58:04 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:11.950 19:58:04 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 4000701 ]] 00:06:11.950 19:58:04 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 4000701 00:06:11.950 19:58:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:11.950 19:58:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:11.950 19:58:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4000701 00:06:11.950 19:58:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:12.524 19:58:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:12.524 19:58:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:12.524 19:58:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4000701 00:06:12.524 19:58:04 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:12.524 19:58:04 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:12.524 19:58:04 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:12.524 19:58:04 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:12.524 SPDK target shutdown done 00:06:12.524 19:58:04 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:12.524 Success 00:06:12.524 00:06:12.524 real 0m1.522s 00:06:12.524 user 0m1.261s 00:06:12.524 sys 0m0.374s 00:06:12.524 19:58:04 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:12.524 19:58:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:12.524 ************************************ 00:06:12.524 END TEST json_config_extra_key 00:06:12.524 ************************************ 00:06:12.524 19:58:04 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:12.524 19:58:04 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:12.524 19:58:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.524 19:58:04 -- common/autotest_common.sh@10 -- # set +x 00:06:12.524 ************************************ 
00:06:12.524 START TEST alias_rpc 00:06:12.524 ************************************ 00:06:12.524 19:58:04 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:12.524 * Looking for test storage... 00:06:12.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:12.524 19:58:04 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:12.524 19:58:04 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=4001078 00:06:12.524 19:58:04 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 4001078 00:06:12.524 19:58:04 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:12.524 19:58:04 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 4001078 ']' 00:06:12.524 19:58:04 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.524 19:58:04 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:12.524 19:58:04 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.524 19:58:04 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:12.524 19:58:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.524 [2024-05-15 19:58:04.989188] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:06:12.524 [2024-05-15 19:58:04.989251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4001078 ] 00:06:12.524 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.785 [2024-05-15 19:58:05.074266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.785 [2024-05-15 19:58:05.141952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.357 19:58:05 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:13.357 19:58:05 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:13.357 19:58:05 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:13.617 19:58:05 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 4001078 00:06:13.617 19:58:05 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 4001078 ']' 00:06:13.617 19:58:05 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 4001078 00:06:13.617 19:58:05 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:06:13.617 19:58:05 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:13.618 19:58:05 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4001078 00:06:13.618 19:58:06 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:13.618 19:58:06 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:13.618 19:58:06 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4001078' 00:06:13.618 killing process with pid 4001078 00:06:13.618 19:58:06 alias_rpc -- common/autotest_common.sh@965 -- # kill 4001078 00:06:13.618 19:58:06 alias_rpc -- common/autotest_common.sh@970 -- # wait 4001078 
00:06:13.879 00:06:13.879 real 0m1.411s 00:06:13.879 user 0m1.597s 00:06:13.879 sys 0m0.366s 00:06:13.879 19:58:06 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:13.879 19:58:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.879 ************************************ 00:06:13.879 END TEST alias_rpc 00:06:13.879 ************************************ 00:06:13.879 19:58:06 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:06:13.879 19:58:06 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:13.879 19:58:06 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:13.879 19:58:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:13.879 19:58:06 -- common/autotest_common.sh@10 -- # set +x 00:06:13.879 ************************************ 00:06:13.879 START TEST spdkcli_tcp 00:06:13.879 ************************************ 00:06:13.879 19:58:06 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:14.140 * Looking for test storage... 00:06:14.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:14.140 19:58:06 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:14.140 19:58:06 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:14.140 19:58:06 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:14.140 19:58:06 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:14.140 19:58:06 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:14.140 19:58:06 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:14.140 19:58:06 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:14.140 19:58:06 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:14.140 19:58:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.140 19:58:06 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=4001472 00:06:14.140 19:58:06 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 4001472 00:06:14.140 19:58:06 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 4001472 ']' 00:06:14.140 19:58:06 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.140 19:58:06 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:14.140 19:58:06 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.140 19:58:06 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:14.141 19:58:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.141 19:58:06 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:14.141 [2024-05-15 19:58:06.473191] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:06:14.141 [2024-05-15 19:58:06.473248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4001472 ] 00:06:14.141 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.141 [2024-05-15 19:58:06.556865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:14.141 [2024-05-15 19:58:06.623917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.141 [2024-05-15 19:58:06.623923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.085 19:58:07 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:15.085 19:58:07 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:06:15.085 19:58:07 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=4001505 00:06:15.085 19:58:07 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:15.085 19:58:07 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:15.085 [ 00:06:15.085 "bdev_malloc_delete", 00:06:15.085 "bdev_malloc_create", 00:06:15.085 "bdev_null_resize", 00:06:15.085 "bdev_null_delete", 00:06:15.085 "bdev_null_create", 00:06:15.085 "bdev_nvme_cuse_unregister", 00:06:15.085 "bdev_nvme_cuse_register", 00:06:15.085 "bdev_opal_new_user", 00:06:15.085 "bdev_opal_set_lock_state", 00:06:15.085 "bdev_opal_delete", 00:06:15.085 "bdev_opal_get_info", 00:06:15.085 "bdev_opal_create", 00:06:15.085 "bdev_nvme_opal_revert", 00:06:15.085 "bdev_nvme_opal_init", 00:06:15.085 "bdev_nvme_send_cmd", 00:06:15.085 "bdev_nvme_get_path_iostat", 00:06:15.085 "bdev_nvme_get_mdns_discovery_info", 00:06:15.085 "bdev_nvme_stop_mdns_discovery", 00:06:15.085 "bdev_nvme_start_mdns_discovery", 00:06:15.085 "bdev_nvme_set_multipath_policy", 00:06:15.085 "bdev_nvme_set_preferred_path", 00:06:15.085 "bdev_nvme_get_io_paths", 00:06:15.085 "bdev_nvme_remove_error_injection", 00:06:15.085 "bdev_nvme_add_error_injection", 00:06:15.085 "bdev_nvme_get_discovery_info", 00:06:15.085 "bdev_nvme_stop_discovery", 00:06:15.085 "bdev_nvme_start_discovery", 00:06:15.085 "bdev_nvme_get_controller_health_info", 00:06:15.085 "bdev_nvme_disable_controller", 00:06:15.085 "bdev_nvme_enable_controller", 00:06:15.085 "bdev_nvme_reset_controller", 00:06:15.085 "bdev_nvme_get_transport_statistics", 00:06:15.085 "bdev_nvme_apply_firmware", 00:06:15.085 "bdev_nvme_detach_controller", 00:06:15.085 "bdev_nvme_get_controllers", 00:06:15.085 "bdev_nvme_attach_controller", 00:06:15.085 "bdev_nvme_set_hotplug", 00:06:15.085 "bdev_nvme_set_options", 00:06:15.085 "bdev_passthru_delete", 00:06:15.085 "bdev_passthru_create", 00:06:15.085 "bdev_lvol_check_shallow_copy", 00:06:15.085 "bdev_lvol_start_shallow_copy", 00:06:15.086 "bdev_lvol_grow_lvstore", 00:06:15.086 "bdev_lvol_get_lvols", 00:06:15.086 "bdev_lvol_get_lvstores", 00:06:15.086 "bdev_lvol_delete", 00:06:15.086 "bdev_lvol_set_read_only", 00:06:15.086 "bdev_lvol_resize", 00:06:15.086 "bdev_lvol_decouple_parent", 00:06:15.086 "bdev_lvol_inflate", 00:06:15.086 "bdev_lvol_rename", 00:06:15.086 "bdev_lvol_clone_bdev", 00:06:15.086 "bdev_lvol_clone", 00:06:15.086 "bdev_lvol_snapshot", 00:06:15.086 "bdev_lvol_create", 00:06:15.086 "bdev_lvol_delete_lvstore", 00:06:15.086 "bdev_lvol_rename_lvstore", 00:06:15.086 "bdev_lvol_create_lvstore", 00:06:15.086 "bdev_raid_set_options", 
00:06:15.086 "bdev_raid_remove_base_bdev", 00:06:15.086 "bdev_raid_add_base_bdev", 00:06:15.086 "bdev_raid_delete", 00:06:15.086 "bdev_raid_create", 00:06:15.086 "bdev_raid_get_bdevs", 00:06:15.086 "bdev_error_inject_error", 00:06:15.086 "bdev_error_delete", 00:06:15.086 "bdev_error_create", 00:06:15.086 "bdev_split_delete", 00:06:15.086 "bdev_split_create", 00:06:15.086 "bdev_delay_delete", 00:06:15.086 "bdev_delay_create", 00:06:15.086 "bdev_delay_update_latency", 00:06:15.086 "bdev_zone_block_delete", 00:06:15.086 "bdev_zone_block_create", 00:06:15.086 "blobfs_create", 00:06:15.086 "blobfs_detect", 00:06:15.086 "blobfs_set_cache_size", 00:06:15.086 "bdev_aio_delete", 00:06:15.086 "bdev_aio_rescan", 00:06:15.086 "bdev_aio_create", 00:06:15.086 "bdev_ftl_set_property", 00:06:15.086 "bdev_ftl_get_properties", 00:06:15.086 "bdev_ftl_get_stats", 00:06:15.086 "bdev_ftl_unmap", 00:06:15.086 "bdev_ftl_unload", 00:06:15.086 "bdev_ftl_delete", 00:06:15.086 "bdev_ftl_load", 00:06:15.086 "bdev_ftl_create", 00:06:15.086 "bdev_virtio_attach_controller", 00:06:15.086 "bdev_virtio_scsi_get_devices", 00:06:15.086 "bdev_virtio_detach_controller", 00:06:15.086 "bdev_virtio_blk_set_hotplug", 00:06:15.086 "bdev_iscsi_delete", 00:06:15.086 "bdev_iscsi_create", 00:06:15.086 "bdev_iscsi_set_options", 00:06:15.086 "accel_error_inject_error", 00:06:15.086 "ioat_scan_accel_module", 00:06:15.086 "dsa_scan_accel_module", 00:06:15.086 "iaa_scan_accel_module", 00:06:15.086 "keyring_file_remove_key", 00:06:15.086 "keyring_file_add_key", 00:06:15.086 "iscsi_get_histogram", 00:06:15.086 "iscsi_enable_histogram", 00:06:15.086 "iscsi_set_options", 00:06:15.086 "iscsi_get_auth_groups", 00:06:15.086 "iscsi_auth_group_remove_secret", 00:06:15.086 "iscsi_auth_group_add_secret", 00:06:15.086 "iscsi_delete_auth_group", 00:06:15.086 "iscsi_create_auth_group", 00:06:15.086 "iscsi_set_discovery_auth", 00:06:15.086 "iscsi_get_options", 00:06:15.086 "iscsi_target_node_request_logout", 00:06:15.086 "iscsi_target_node_set_redirect", 00:06:15.086 "iscsi_target_node_set_auth", 00:06:15.086 "iscsi_target_node_add_lun", 00:06:15.086 "iscsi_get_stats", 00:06:15.086 "iscsi_get_connections", 00:06:15.086 "iscsi_portal_group_set_auth", 00:06:15.086 "iscsi_start_portal_group", 00:06:15.086 "iscsi_delete_portal_group", 00:06:15.086 "iscsi_create_portal_group", 00:06:15.086 "iscsi_get_portal_groups", 00:06:15.086 "iscsi_delete_target_node", 00:06:15.086 "iscsi_target_node_remove_pg_ig_maps", 00:06:15.086 "iscsi_target_node_add_pg_ig_maps", 00:06:15.086 "iscsi_create_target_node", 00:06:15.086 "iscsi_get_target_nodes", 00:06:15.086 "iscsi_delete_initiator_group", 00:06:15.086 "iscsi_initiator_group_remove_initiators", 00:06:15.086 "iscsi_initiator_group_add_initiators", 00:06:15.086 "iscsi_create_initiator_group", 00:06:15.086 "iscsi_get_initiator_groups", 00:06:15.086 "nvmf_set_crdt", 00:06:15.086 "nvmf_set_config", 00:06:15.086 "nvmf_set_max_subsystems", 00:06:15.086 "nvmf_stop_mdns_prr", 00:06:15.086 "nvmf_publish_mdns_prr", 00:06:15.086 "nvmf_subsystem_get_listeners", 00:06:15.086 "nvmf_subsystem_get_qpairs", 00:06:15.086 "nvmf_subsystem_get_controllers", 00:06:15.086 "nvmf_get_stats", 00:06:15.086 "nvmf_get_transports", 00:06:15.086 "nvmf_create_transport", 00:06:15.086 "nvmf_get_targets", 00:06:15.086 "nvmf_delete_target", 00:06:15.086 "nvmf_create_target", 00:06:15.086 "nvmf_subsystem_allow_any_host", 00:06:15.086 "nvmf_subsystem_remove_host", 00:06:15.086 "nvmf_subsystem_add_host", 00:06:15.086 "nvmf_ns_remove_host", 00:06:15.086 
"nvmf_ns_add_host", 00:06:15.086 "nvmf_subsystem_remove_ns", 00:06:15.086 "nvmf_subsystem_add_ns", 00:06:15.086 "nvmf_subsystem_listener_set_ana_state", 00:06:15.086 "nvmf_discovery_get_referrals", 00:06:15.086 "nvmf_discovery_remove_referral", 00:06:15.086 "nvmf_discovery_add_referral", 00:06:15.086 "nvmf_subsystem_remove_listener", 00:06:15.086 "nvmf_subsystem_add_listener", 00:06:15.086 "nvmf_delete_subsystem", 00:06:15.086 "nvmf_create_subsystem", 00:06:15.086 "nvmf_get_subsystems", 00:06:15.086 "env_dpdk_get_mem_stats", 00:06:15.086 "nbd_get_disks", 00:06:15.086 "nbd_stop_disk", 00:06:15.086 "nbd_start_disk", 00:06:15.086 "ublk_recover_disk", 00:06:15.086 "ublk_get_disks", 00:06:15.086 "ublk_stop_disk", 00:06:15.086 "ublk_start_disk", 00:06:15.086 "ublk_destroy_target", 00:06:15.086 "ublk_create_target", 00:06:15.086 "virtio_blk_create_transport", 00:06:15.086 "virtio_blk_get_transports", 00:06:15.086 "vhost_controller_set_coalescing", 00:06:15.086 "vhost_get_controllers", 00:06:15.086 "vhost_delete_controller", 00:06:15.086 "vhost_create_blk_controller", 00:06:15.086 "vhost_scsi_controller_remove_target", 00:06:15.086 "vhost_scsi_controller_add_target", 00:06:15.086 "vhost_start_scsi_controller", 00:06:15.086 "vhost_create_scsi_controller", 00:06:15.086 "thread_set_cpumask", 00:06:15.086 "framework_get_scheduler", 00:06:15.086 "framework_set_scheduler", 00:06:15.086 "framework_get_reactors", 00:06:15.086 "thread_get_io_channels", 00:06:15.086 "thread_get_pollers", 00:06:15.086 "thread_get_stats", 00:06:15.086 "framework_monitor_context_switch", 00:06:15.086 "spdk_kill_instance", 00:06:15.086 "log_enable_timestamps", 00:06:15.086 "log_get_flags", 00:06:15.086 "log_clear_flag", 00:06:15.086 "log_set_flag", 00:06:15.086 "log_get_level", 00:06:15.086 "log_set_level", 00:06:15.086 "log_get_print_level", 00:06:15.086 "log_set_print_level", 00:06:15.086 "framework_enable_cpumask_locks", 00:06:15.086 "framework_disable_cpumask_locks", 00:06:15.086 "framework_wait_init", 00:06:15.086 "framework_start_init", 00:06:15.086 "scsi_get_devices", 00:06:15.086 "bdev_get_histogram", 00:06:15.086 "bdev_enable_histogram", 00:06:15.086 "bdev_set_qos_limit", 00:06:15.086 "bdev_set_qd_sampling_period", 00:06:15.086 "bdev_get_bdevs", 00:06:15.086 "bdev_reset_iostat", 00:06:15.086 "bdev_get_iostat", 00:06:15.086 "bdev_examine", 00:06:15.086 "bdev_wait_for_examine", 00:06:15.086 "bdev_set_options", 00:06:15.086 "notify_get_notifications", 00:06:15.086 "notify_get_types", 00:06:15.086 "accel_get_stats", 00:06:15.086 "accel_set_options", 00:06:15.086 "accel_set_driver", 00:06:15.086 "accel_crypto_key_destroy", 00:06:15.086 "accel_crypto_keys_get", 00:06:15.086 "accel_crypto_key_create", 00:06:15.086 "accel_assign_opc", 00:06:15.086 "accel_get_module_info", 00:06:15.086 "accel_get_opc_assignments", 00:06:15.086 "vmd_rescan", 00:06:15.086 "vmd_remove_device", 00:06:15.086 "vmd_enable", 00:06:15.086 "sock_get_default_impl", 00:06:15.086 "sock_set_default_impl", 00:06:15.086 "sock_impl_set_options", 00:06:15.086 "sock_impl_get_options", 00:06:15.086 "iobuf_get_stats", 00:06:15.086 "iobuf_set_options", 00:06:15.086 "framework_get_pci_devices", 00:06:15.086 "framework_get_config", 00:06:15.086 "framework_get_subsystems", 00:06:15.086 "trace_get_info", 00:06:15.086 "trace_get_tpoint_group_mask", 00:06:15.086 "trace_disable_tpoint_group", 00:06:15.086 "trace_enable_tpoint_group", 00:06:15.086 "trace_clear_tpoint_mask", 00:06:15.086 "trace_set_tpoint_mask", 00:06:15.086 "keyring_get_keys", 00:06:15.086 
"spdk_get_version", 00:06:15.086 "rpc_get_methods" 00:06:15.086 ] 00:06:15.086 19:58:07 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:15.086 19:58:07 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:15.086 19:58:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:15.086 19:58:07 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:15.086 19:58:07 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 4001472 00:06:15.086 19:58:07 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 4001472 ']' 00:06:15.086 19:58:07 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 4001472 00:06:15.086 19:58:07 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:06:15.086 19:58:07 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:15.086 19:58:07 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4001472 00:06:15.348 19:58:07 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:15.348 19:58:07 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:15.348 19:58:07 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4001472' 00:06:15.348 killing process with pid 4001472 00:06:15.348 19:58:07 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 4001472 00:06:15.348 19:58:07 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 4001472 00:06:15.348 00:06:15.348 real 0m1.500s 00:06:15.348 user 0m2.865s 00:06:15.348 sys 0m0.433s 00:06:15.348 19:58:07 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:15.348 19:58:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:15.348 ************************************ 00:06:15.348 END TEST spdkcli_tcp 00:06:15.348 ************************************ 00:06:15.609 19:58:07 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:15.609 19:58:07 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:15.609 19:58:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:15.609 19:58:07 -- common/autotest_common.sh@10 -- # set +x 00:06:15.609 ************************************ 00:06:15.609 START TEST dpdk_mem_utility 00:06:15.609 ************************************ 00:06:15.609 19:58:07 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:15.609 * Looking for test storage... 
00:06:15.609 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:15.609 19:58:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:15.609 19:58:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=4001874 00:06:15.609 19:58:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 4001874 00:06:15.609 19:58:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:15.609 19:58:08 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 4001874 ']' 00:06:15.609 19:58:08 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.609 19:58:08 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:15.609 19:58:08 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.609 19:58:08 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:15.609 19:58:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:15.609 [2024-05-15 19:58:08.061526] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:06:15.609 [2024-05-15 19:58:08.061591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4001874 ] 00:06:15.609 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.869 [2024-05-15 19:58:08.150251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.869 [2024-05-15 19:58:08.220837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.441 19:58:08 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:16.441 19:58:08 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:06:16.441 19:58:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:16.441 19:58:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:16.441 19:58:08 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.441 19:58:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:16.441 { 00:06:16.441 "filename": "/tmp/spdk_mem_dump.txt" 00:06:16.441 } 00:06:16.441 19:58:08 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.441 19:58:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:16.703 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:16.703 1 heaps totaling size 814.000000 MiB 00:06:16.703 size: 814.000000 MiB heap id: 0 00:06:16.703 end heaps---------- 00:06:16.703 8 mempools totaling size 598.116089 MiB 00:06:16.703 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:16.703 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:16.703 size: 84.521057 MiB name: bdev_io_4001874 00:06:16.703 size: 51.011292 MiB name: evtpool_4001874 00:06:16.703 size: 50.003479 MiB name: 
msgpool_4001874 00:06:16.703 size: 21.763794 MiB name: PDU_Pool 00:06:16.703 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:16.703 size: 0.026123 MiB name: Session_Pool 00:06:16.703 end mempools------- 00:06:16.703 6 memzones totaling size 4.142822 MiB 00:06:16.703 size: 1.000366 MiB name: RG_ring_0_4001874 00:06:16.703 size: 1.000366 MiB name: RG_ring_1_4001874 00:06:16.703 size: 1.000366 MiB name: RG_ring_4_4001874 00:06:16.703 size: 1.000366 MiB name: RG_ring_5_4001874 00:06:16.703 size: 0.125366 MiB name: RG_ring_2_4001874 00:06:16.703 size: 0.015991 MiB name: RG_ring_3_4001874 00:06:16.703 end memzones------- 00:06:16.703 19:58:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:16.703 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:16.703 list of free elements. size: 12.519348 MiB 00:06:16.703 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:16.703 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:16.703 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:16.703 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:16.703 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:16.703 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:16.703 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:16.703 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:16.703 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:16.703 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:16.703 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:16.703 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:16.703 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:16.703 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:16.703 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:16.703 list of standard malloc elements. 
size: 199.218079 MiB 00:06:16.703 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:16.703 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:16.703 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:16.703 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:16.703 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:16.703 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:16.703 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:16.703 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:16.703 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:16.703 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:16.703 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:16.703 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:16.703 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:16.703 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:16.703 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:16.703 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:16.703 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:16.703 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:16.703 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:16.703 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:16.703 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:16.703 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:16.703 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:16.703 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:16.703 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:16.703 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:16.703 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:16.703 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:16.703 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:16.703 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:16.703 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:16.703 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:16.703 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:16.703 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:16.703 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:16.703 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:16.703 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:16.703 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:16.703 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:16.703 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:16.703 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:16.703 list of memzone associated elements. 
size: 602.262573 MiB 00:06:16.703 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:16.703 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:16.703 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:16.703 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:16.703 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:16.703 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_4001874_0 00:06:16.703 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:16.703 associated memzone info: size: 48.002930 MiB name: MP_evtpool_4001874_0 00:06:16.703 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:16.703 associated memzone info: size: 48.002930 MiB name: MP_msgpool_4001874_0 00:06:16.703 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:16.703 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:16.703 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:16.703 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:16.703 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:16.703 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_4001874 00:06:16.703 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:16.703 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_4001874 00:06:16.703 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:16.703 associated memzone info: size: 1.007996 MiB name: MP_evtpool_4001874 00:06:16.703 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:16.703 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:16.703 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:16.703 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:16.703 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:16.703 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:16.703 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:16.703 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:16.703 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:16.703 associated memzone info: size: 1.000366 MiB name: RG_ring_0_4001874 00:06:16.703 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:16.703 associated memzone info: size: 1.000366 MiB name: RG_ring_1_4001874 00:06:16.703 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:16.703 associated memzone info: size: 1.000366 MiB name: RG_ring_4_4001874 00:06:16.703 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:16.703 associated memzone info: size: 1.000366 MiB name: RG_ring_5_4001874 00:06:16.703 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:16.703 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_4001874 00:06:16.703 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:16.703 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:16.703 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:16.703 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:16.703 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:16.703 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:16.703 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:16.703 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_4001874 00:06:16.703 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:16.703 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:16.703 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:16.703 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:16.703 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:16.703 associated memzone info: size: 0.015991 MiB name: RG_ring_3_4001874 00:06:16.703 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:16.703 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:16.703 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:16.703 associated memzone info: size: 0.000183 MiB name: MP_msgpool_4001874 00:06:16.703 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:16.703 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_4001874 00:06:16.703 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:16.703 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:16.703 19:58:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:16.703 19:58:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 4001874 00:06:16.703 19:58:09 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 4001874 ']' 00:06:16.703 19:58:09 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 4001874 00:06:16.703 19:58:09 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:06:16.703 19:58:09 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:16.704 19:58:09 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4001874 00:06:16.704 19:58:09 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:16.704 19:58:09 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:16.704 19:58:09 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4001874' 00:06:16.704 killing process with pid 4001874 00:06:16.704 19:58:09 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 4001874 00:06:16.704 19:58:09 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 4001874 00:06:16.965 00:06:16.965 real 0m1.414s 00:06:16.965 user 0m1.586s 00:06:16.965 sys 0m0.400s 00:06:16.965 19:58:09 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:16.965 19:58:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:16.965 ************************************ 00:06:16.965 END TEST dpdk_mem_utility 00:06:16.965 ************************************ 00:06:16.965 19:58:09 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:16.965 19:58:09 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:16.965 19:58:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:16.965 19:58:09 -- common/autotest_common.sh@10 -- # set +x 00:06:16.965 ************************************ 00:06:16.965 START TEST event 00:06:16.965 ************************************ 00:06:16.965 19:58:09 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:17.226 * Looking for test storage... 
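The dpdk_mem_utility run above is a two-step flow: the env_dpdk_get_mem_stats RPC makes the running target write a DPDK memory snapshot (the trace shows it landing in /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py post-processes that dump into the heap/mempool/memzone summary and, with -m 0, the element-level view of heap 0 printed above. A minimal sketch against an already-running spdk_tgt; the assumption here is that dpdk_mem_info.py picks up the default dump path on its own, as it appears to in the trace:

# Dump the target's DPDK memory state; the RPC reports the file name.
./scripts/rpc.py env_dpdk_get_mem_stats

# Summarize heaps, mempools and memzones from the dump.
./scripts/dpdk_mem_info.py

# Element-level breakdown for heap 0, as run in the trace.
./scripts/dpdk_mem_info.py -m 0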
00:06:17.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:17.226 19:58:09 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:17.226 19:58:09 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:17.226 19:58:09 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:17.226 19:58:09 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:17.226 19:58:09 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:17.226 19:58:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.226 ************************************ 00:06:17.226 START TEST event_perf 00:06:17.226 ************************************ 00:06:17.226 19:58:09 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:17.226 Running I/O for 1 seconds...[2024-05-15 19:58:09.564130] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:06:17.226 [2024-05-15 19:58:09.564237] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4002205 ] 00:06:17.226 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.226 [2024-05-15 19:58:09.657615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:17.487 [2024-05-15 19:58:09.739914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.487 [2024-05-15 19:58:09.740034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.487 [2024-05-15 19:58:09.740193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.487 [2024-05-15 19:58:09.740193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:18.430 Running I/O for 1 seconds... 00:06:18.430 lcore 0: 169156 00:06:18.430 lcore 1: 169156 00:06:18.430 lcore 2: 169154 00:06:18.430 lcore 3: 169157 00:06:18.430 done. 00:06:18.430 00:06:18.430 real 0m1.252s 00:06:18.430 user 0m4.152s 00:06:18.430 sys 0m0.099s 00:06:18.430 19:58:10 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:18.430 19:58:10 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:18.430 ************************************ 00:06:18.430 END TEST event_perf 00:06:18.430 ************************************ 00:06:18.430 19:58:10 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:18.430 19:58:10 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:18.430 19:58:10 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:18.430 19:58:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.430 ************************************ 00:06:18.430 START TEST event_reactor 00:06:18.430 ************************************ 00:06:18.430 19:58:10 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:18.430 [2024-05-15 19:58:10.893280] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:06:18.430 [2024-05-15 19:58:10.893386] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4002373 ] 00:06:18.430 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.692 [2024-05-15 19:58:10.981245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.692 [2024-05-15 19:58:11.049136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.634 test_start 00:06:19.634 oneshot 00:06:19.634 tick 100 00:06:19.634 tick 100 00:06:19.634 tick 250 00:06:19.634 tick 100 00:06:19.634 tick 100 00:06:19.634 tick 250 00:06:19.634 tick 100 00:06:19.634 tick 500 00:06:19.634 tick 100 00:06:19.634 tick 100 00:06:19.634 tick 250 00:06:19.634 tick 100 00:06:19.634 tick 100 00:06:19.634 test_end 00:06:19.634 00:06:19.634 real 0m1.228s 00:06:19.634 user 0m1.132s 00:06:19.634 sys 0m0.091s 00:06:19.634 19:58:12 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:19.634 19:58:12 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:19.634 ************************************ 00:06:19.634 END TEST event_reactor 00:06:19.634 ************************************ 00:06:19.634 19:58:12 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:19.634 19:58:12 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:19.634 19:58:12 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:19.634 19:58:12 event -- common/autotest_common.sh@10 -- # set +x 00:06:19.894 ************************************ 00:06:19.894 START TEST event_reactor_perf 00:06:19.894 ************************************ 00:06:19.894 19:58:12 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:19.894 [2024-05-15 19:58:12.198387] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:06:19.894 [2024-05-15 19:58:12.198487] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4002659 ] 00:06:19.894 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.894 [2024-05-15 19:58:12.287683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.894 [2024-05-15 19:58:12.364106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.281 test_start 00:06:21.281 test_end 00:06:21.281 Performance: 367286 events per second 00:06:21.281 00:06:21.281 real 0m1.237s 00:06:21.281 user 0m1.136s 00:06:21.281 sys 0m0.096s 00:06:21.281 19:58:13 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:21.281 19:58:13 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:21.281 ************************************ 00:06:21.281 END TEST event_reactor_perf 00:06:21.281 ************************************ 00:06:21.281 19:58:13 event -- event/event.sh@49 -- # uname -s 00:06:21.281 19:58:13 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:21.281 19:58:13 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:21.281 19:58:13 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:21.281 19:58:13 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:21.281 19:58:13 event -- common/autotest_common.sh@10 -- # set +x 00:06:21.281 ************************************ 00:06:21.281 START TEST event_scheduler 00:06:21.281 ************************************ 00:06:21.281 19:58:13 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:21.281 * Looking for test storage... 00:06:21.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:21.281 19:58:13 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:21.281 19:58:13 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=4003039 00:06:21.281 19:58:13 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:21.281 19:58:13 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:21.281 19:58:13 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 4003039 00:06:21.281 19:58:13 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 4003039 ']' 00:06:21.281 19:58:13 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.281 19:58:13 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:21.281 19:58:13 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
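The three event micro-benchmarks above (event_perf, reactor, reactor_perf) are plain binaries under test/event; the traces run each with a core mask and a one-second duration. Re-running them by hand with the same arguments, from the top of an SPDK checkout with the test tree built:

# Event round-trip throughput on four cores for one second;
# prints a per-lcore event count, as in the trace.
./test/event/event_perf/event_perf -m 0xF -t 1

# Single-core reactor timer/poller exercise for one second.
./test/event/reactor/reactor -t 1

# Reactor event-processing rate; reports events per second.
./test/event/reactor_perf/reactor_perf -t 1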
00:06:21.281 19:58:13 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:21.282 19:58:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.282 [2024-05-15 19:58:13.641644] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:06:21.282 [2024-05-15 19:58:13.641711] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4003039 ] 00:06:21.282 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.282 [2024-05-15 19:58:13.702790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:21.282 [2024-05-15 19:58:13.768842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.282 [2024-05-15 19:58:13.768969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.282 [2024-05-15 19:58:13.769125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.282 [2024-05-15 19:58:13.769126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.543 19:58:13 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:21.543 19:58:13 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:06:21.543 19:58:13 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:21.543 19:58:13 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.543 19:58:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.543 POWER: Env isn't set yet! 00:06:21.543 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:21.543 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:21.543 POWER: Cannot set governor of lcore 0 to userspace 00:06:21.543 POWER: Attempting to initialise PSTAT power management... 00:06:21.543 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:21.543 POWER: Initialized successfully for lcore 0 power management 00:06:21.543 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:21.543 POWER: Initialized successfully for lcore 1 power management 00:06:21.543 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:21.543 POWER: Initialized successfully for lcore 2 power management 00:06:21.543 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:21.543 POWER: Initialized successfully for lcore 3 power management 00:06:21.543 19:58:13 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.543 19:58:13 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:21.543 19:58:13 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.543 19:58:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.543 [2024-05-15 19:58:13.916823] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:21.543 19:58:13 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.543 19:58:13 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:21.543 19:58:13 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:21.543 19:58:13 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:21.543 19:58:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.543 ************************************ 00:06:21.543 START TEST scheduler_create_thread 00:06:21.543 ************************************ 00:06:21.543 19:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:06:21.543 19:58:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:21.543 19:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.543 19:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.543 2 00:06:21.543 19:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.543 19:58:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:21.543 19:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.543 19:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.543 3 00:06:21.543 19:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.543 19:58:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:21.543 19:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.543 19:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.543 4 00:06:21.543 19:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.543 19:58:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:21.543 19:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.543 19:58:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.543 5 00:06:21.543 19:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.543 19:58:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:21.543 19:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.543 19:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.543 6 00:06:21.543 19:58:14 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.543 19:58:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:21.543 19:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.543 19:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.805 7 00:06:21.805 19:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.805 19:58:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:21.805 19:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.805 19:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.805 8 00:06:21.805 19:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.805 19:58:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:21.805 19:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.805 19:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.066 9 00:06:22.066 19:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.066 19:58:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:22.066 19:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.066 19:58:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.451 10 00:06:23.451 19:58:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.451 19:58:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:23.451 19:58:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.451 19:58:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.389 19:58:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.389 19:58:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:24.389 19:58:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:24.389 19:58:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.389 19:58:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.804 19:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.804 19:58:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:24.804 19:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.804 19:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.374 19:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.374 19:58:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:25.374 19:58:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:25.374 19:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.374 19:58:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.027 19:58:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.027 00:06:26.027 real 0m4.466s 00:06:26.027 user 0m0.026s 00:06:26.027 sys 0m0.004s 00:06:26.027 19:58:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:26.027 19:58:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.027 ************************************ 00:06:26.027 END TEST scheduler_create_thread 00:06:26.027 ************************************ 00:06:26.027 19:58:18 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:26.027 19:58:18 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 4003039 00:06:26.028 19:58:18 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 4003039 ']' 00:06:26.028 19:58:18 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 4003039 00:06:26.028 19:58:18 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:06:26.028 19:58:18 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:26.028 19:58:18 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4003039 00:06:26.028 19:58:18 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:26.028 19:58:18 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:26.028 19:58:18 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4003039' 00:06:26.028 killing process with pid 4003039 00:06:26.028 19:58:18 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 4003039 00:06:26.028 19:58:18 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 4003039 00:06:26.287 [2024-05-15 19:58:18.703139] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
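The scheduler_create_thread trace above drives the scheduler test app purely over RPC via a test-local rpc.py plugin: it switches to the dynamic scheduler, finishes framework init, creates pinned active and idle threads with per-core masks and activity percentages, re-weights one thread and deletes another. A minimal sketch of those calls, assuming the app from test/event/scheduler is already running with --wait-for-rpc and that the plugin directory is on PYTHONPATH so rpc.py can import scheduler_plugin (how the test's rpc_cmd wrapper resolves the plugin is not shown in the trace, so the export below is an assumption):

# Make the test's scheduler_plugin importable by rpc.py.
export PYTHONPATH=./test/event/scheduler:$PYTHONPATH

# Pick the dynamic scheduler, then let initialization finish.
./scripts/rpc.py framework_set_scheduler dynamic
./scripts/rpc.py framework_start_init

# An always-busy thread pinned to core 0 and an idle one, as in the trace.
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0

# The create call prints the new thread id; re-weight one thread to 50%
# activity, then create and delete another, as at the end of the trace.
tid=$(./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
tid=$(./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete "$tid"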
00:06:26.546 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:06:26.546 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:26.546 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:06:26.546 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:26.546 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:06:26.546 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:26.546 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:06:26.546 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:26.546 00:06:26.546 real 0m5.365s 00:06:26.546 user 0m11.801s 00:06:26.546 sys 0m0.333s 00:06:26.546 19:58:18 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:26.546 19:58:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:26.546 ************************************ 00:06:26.546 END TEST event_scheduler 00:06:26.546 ************************************ 00:06:26.546 19:58:18 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:26.546 19:58:18 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:26.546 19:58:18 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:26.546 19:58:18 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:26.546 19:58:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:26.546 ************************************ 00:06:26.546 START TEST app_repeat 00:06:26.546 ************************************ 00:06:26.546 19:58:18 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:06:26.546 19:58:18 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.546 19:58:18 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.546 19:58:18 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:26.546 19:58:18 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.546 19:58:18 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:26.546 19:58:18 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:26.546 19:58:18 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:26.546 19:58:18 event.app_repeat -- event/event.sh@19 -- # repeat_pid=4004104 00:06:26.546 19:58:18 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:26.546 19:58:18 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 4004104' 00:06:26.546 Process app_repeat pid: 4004104 00:06:26.546 19:58:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:26.546 19:58:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:26.547 spdk_app_start Round 0 00:06:26.547 19:58:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4004104 /var/tmp/spdk-nbd.sock 00:06:26.547 19:58:18 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 4004104 ']' 00:06:26.547 19:58:18 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:26.547 19:58:18 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:26.547 19:58:18 
event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:26.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:26.547 19:58:18 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:26.547 19:58:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:26.547 19:58:18 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:26.547 [2024-05-15 19:58:18.981095] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:06:26.547 [2024-05-15 19:58:18.981155] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4004104 ] 00:06:26.547 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.806 [2024-05-15 19:58:19.067240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:26.806 [2024-05-15 19:58:19.132878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.806 [2024-05-15 19:58:19.132885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.806 19:58:19 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:26.807 19:58:19 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:26.807 19:58:19 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:27.067 Malloc0 00:06:27.067 19:58:19 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:27.327 Malloc1 00:06:27.327 19:58:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:27.327 19:58:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.327 19:58:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:27.327 19:58:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:27.327 19:58:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.327 19:58:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:27.327 19:58:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:27.327 19:58:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.327 19:58:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:27.327 19:58:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:27.327 19:58:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.327 19:58:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:27.327 19:58:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:27.327 19:58:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:27.327 19:58:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.327 19:58:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:27.588 /dev/nbd0 00:06:27.588 19:58:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:27.588 19:58:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:27.588 19:58:19 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:27.588 19:58:19 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:27.588 19:58:19 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:27.588 19:58:19 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:27.588 19:58:19 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:27.588 19:58:19 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:27.588 19:58:19 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:27.588 19:58:19 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:27.588 19:58:19 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:27.588 1+0 records in 00:06:27.588 1+0 records out 00:06:27.588 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000205192 s, 20.0 MB/s 00:06:27.588 19:58:19 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:27.588 19:58:19 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:27.588 19:58:19 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:27.588 19:58:19 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:27.588 19:58:19 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:27.588 19:58:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.588 19:58:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.588 19:58:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:27.588 /dev/nbd1 00:06:27.588 19:58:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:27.588 19:58:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:27.588 19:58:20 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:27.588 19:58:20 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:27.588 19:58:20 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:27.588 19:58:20 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:27.588 19:58:20 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:27.588 19:58:20 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:27.588 19:58:20 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:27.588 19:58:20 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:27.588 19:58:20 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:27.588 1+0 records in 00:06:27.588 1+0 records out 00:06:27.588 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000196655 s, 20.8 MB/s 00:06:27.588 19:58:20 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:27.588 19:58:20 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:27.588 19:58:20 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:27.849 19:58:20 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:27.849 19:58:20 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:27.849 19:58:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.849 19:58:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.849 19:58:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.849 19:58:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.849 19:58:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.849 19:58:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:27.849 { 00:06:27.849 "nbd_device": "/dev/nbd0", 00:06:27.849 "bdev_name": "Malloc0" 00:06:27.849 }, 00:06:27.849 { 00:06:27.849 "nbd_device": "/dev/nbd1", 00:06:27.849 "bdev_name": "Malloc1" 00:06:27.849 } 00:06:27.849 ]' 00:06:27.849 19:58:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:27.849 { 00:06:27.849 "nbd_device": "/dev/nbd0", 00:06:27.849 "bdev_name": "Malloc0" 00:06:27.849 }, 00:06:27.849 { 00:06:27.849 "nbd_device": "/dev/nbd1", 00:06:27.849 "bdev_name": "Malloc1" 00:06:27.849 } 00:06:27.849 ]' 00:06:27.849 19:58:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.849 19:58:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:27.849 /dev/nbd1' 00:06:27.849 19:58:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:27.849 /dev/nbd1' 00:06:27.849 19:58:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.849 19:58:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:27.849 19:58:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:28.110 19:58:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:28.110 19:58:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:28.110 19:58:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:28.110 19:58:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.110 19:58:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.110 19:58:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:28.110 19:58:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:28.110 19:58:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:28.110 19:58:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:28.110 256+0 records in 00:06:28.110 256+0 records out 00:06:28.110 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124162 s, 84.5 MB/s 00:06:28.110 19:58:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in 
"${nbd_list[@]}" 00:06:28.110 19:58:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:28.110 256+0 records in 00:06:28.110 256+0 records out 00:06:28.110 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0187781 s, 55.8 MB/s 00:06:28.110 19:58:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.110 19:58:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:28.110 256+0 records in 00:06:28.110 256+0 records out 00:06:28.110 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0172827 s, 60.7 MB/s 00:06:28.110 19:58:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:28.110 19:58:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.110 19:58:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.110 19:58:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:28.110 19:58:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:28.110 19:58:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:28.110 19:58:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:28.110 19:58:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.110 19:58:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:28.110 19:58:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.110 19:58:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:28.110 19:58:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:28.110 19:58:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:28.110 19:58:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.110 19:58:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.110 19:58:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:28.110 19:58:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:28.110 19:58:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.110 19:58:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:28.371 19:58:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:28.371 19:58:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:28.371 19:58:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:28.371 19:58:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.371 19:58:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.371 19:58:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:28.371 19:58:20 event.app_repeat -- bdev/nbd_common.sh@41 
-- # break 00:06:28.371 19:58:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.371 19:58:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.371 19:58:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:28.371 19:58:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:28.371 19:58:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:28.371 19:58:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:28.371 19:58:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.371 19:58:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.371 19:58:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:28.371 19:58:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:28.371 19:58:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.371 19:58:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.371 19:58:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.371 19:58:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.631 19:58:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:28.631 19:58:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:28.631 19:58:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.631 19:58:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:28.631 19:58:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:28.631 19:58:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.631 19:58:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:28.631 19:58:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:28.631 19:58:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:28.631 19:58:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:28.631 19:58:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:28.631 19:58:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:28.631 19:58:21 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:28.892 19:58:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:29.153 [2024-05-15 19:58:21.442037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:29.153 [2024-05-15 19:58:21.506852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.153 [2024-05-15 19:58:21.506858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.153 [2024-05-15 19:58:21.538878] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:29.153 [2024-05-15 19:58:21.538910] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
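In condensed form, the Round 0 flow traced above is the following NBD round-trip check. This is a sketch rather than the test script itself: $RPC and $TMP are stand-in variables for the rpc.py invocation and the nbdrandtest scratch file used by the test, error handling is omitted, and the paths are shortened.

# assumes an SPDK app (here app_repeat) is already listening on /var/tmp/spdk-nbd.sock
RPC="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"   # path relative to an SPDK checkout
TMP=/tmp/nbdrandtest                               # the test uses test/event/nbdrandtest

# create two 64 MB malloc bdevs with a 4096-byte block size and expose them over NBD
$RPC bdev_malloc_create 64 4096        # -> Malloc0
$RPC bdev_malloc_create 64 4096        # -> Malloc1
$RPC nbd_start_disk Malloc0 /dev/nbd0
$RPC nbd_start_disk Malloc1 /dev/nbd1

# push 1 MiB of random data through each NBD device, then read back and compare
dd if=/dev/urandom of="$TMP" bs=4096 count=256
for d in /dev/nbd0 /dev/nbd1; do
    dd if="$TMP" of="$d" bs=4096 count=256 oflag=direct
    cmp -b -n 1M "$TMP" "$d"           # any mismatch fails the round
done
rm "$TMP"

# detach both NBD devices again
$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_stop_disk /dev/nbd1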
00:06:32.453 19:58:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:32.453 19:58:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:32.453 spdk_app_start Round 1 00:06:32.453 19:58:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4004104 /var/tmp/spdk-nbd.sock 00:06:32.453 19:58:24 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 4004104 ']' 00:06:32.453 19:58:24 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:32.453 19:58:24 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:32.453 19:58:24 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:32.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:32.453 19:58:24 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:32.453 19:58:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:32.453 19:58:24 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:32.453 19:58:24 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:32.453 19:58:24 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:32.453 Malloc0 00:06:32.453 19:58:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:32.453 Malloc1 00:06:32.453 19:58:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.453 19:58:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.453 19:58:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.453 19:58:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:32.453 19:58:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.453 19:58:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:32.453 19:58:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.453 19:58:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.453 19:58:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.453 19:58:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:32.453 19:58:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.453 19:58:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:32.453 19:58:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:32.453 19:58:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:32.453 19:58:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.453 19:58:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:32.713 /dev/nbd0 00:06:32.713 19:58:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:32.713 19:58:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:06:32.713 19:58:25 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:32.713 19:58:25 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:32.713 19:58:25 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:32.713 19:58:25 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:32.713 19:58:25 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:32.713 19:58:25 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:32.713 19:58:25 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:32.713 19:58:25 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:32.713 19:58:25 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.713 1+0 records in 00:06:32.713 1+0 records out 00:06:32.713 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275851 s, 14.8 MB/s 00:06:32.713 19:58:25 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:32.713 19:58:25 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:32.713 19:58:25 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:32.713 19:58:25 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:32.713 19:58:25 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:32.713 19:58:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.713 19:58:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.713 19:58:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:32.973 /dev/nbd1 00:06:32.973 19:58:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:32.973 19:58:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:32.973 19:58:25 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:32.973 19:58:25 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:32.973 19:58:25 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:32.973 19:58:25 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:32.973 19:58:25 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:32.973 19:58:25 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:32.973 19:58:25 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:32.973 19:58:25 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:32.973 19:58:25 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.973 1+0 records in 00:06:32.973 1+0 records out 00:06:32.973 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211936 s, 19.3 MB/s 00:06:32.973 19:58:25 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:32.973 19:58:25 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:32.973 19:58:25 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:32.973 19:58:25 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:32.973 19:58:25 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:32.973 19:58:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.973 19:58:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.973 19:58:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.973 19:58:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.973 19:58:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:33.233 { 00:06:33.233 "nbd_device": "/dev/nbd0", 00:06:33.233 "bdev_name": "Malloc0" 00:06:33.233 }, 00:06:33.233 { 00:06:33.233 "nbd_device": "/dev/nbd1", 00:06:33.233 "bdev_name": "Malloc1" 00:06:33.233 } 00:06:33.233 ]' 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:33.233 { 00:06:33.233 "nbd_device": "/dev/nbd0", 00:06:33.233 "bdev_name": "Malloc0" 00:06:33.233 }, 00:06:33.233 { 00:06:33.233 "nbd_device": "/dev/nbd1", 00:06:33.233 "bdev_name": "Malloc1" 00:06:33.233 } 00:06:33.233 ]' 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:33.233 /dev/nbd1' 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:33.233 /dev/nbd1' 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:33.233 256+0 records in 00:06:33.233 256+0 records out 00:06:33.233 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117923 s, 88.9 MB/s 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:33.233 256+0 records in 00:06:33.233 256+0 records out 00:06:33.233 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0164896 s, 63.6 MB/s 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:33.233 256+0 records in 00:06:33.233 256+0 records out 00:06:33.233 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200913 s, 52.2 MB/s 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.233 19:58:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:33.493 19:58:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:33.493 19:58:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:33.493 19:58:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:33.493 19:58:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.493 19:58:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.494 19:58:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:33.494 19:58:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:33.494 19:58:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.494 19:58:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.494 19:58:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:33.754 19:58:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:33.754 19:58:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:33.754 19:58:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:33.754 19:58:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.754 19:58:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.754 19:58:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:33.754 19:58:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:33.754 19:58:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.754 19:58:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:33.754 19:58:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.754 19:58:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.754 19:58:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:33.754 19:58:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:33.754 19:58:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:34.015 19:58:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:34.015 19:58:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:34.015 19:58:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.015 19:58:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:34.015 19:58:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:34.015 19:58:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:34.015 19:58:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:34.015 19:58:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:34.015 19:58:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:34.015 19:58:26 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:34.015 19:58:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:34.276 [2024-05-15 19:58:26.621289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.276 [2024-05-15 19:58:26.683792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.276 [2024-05-15 19:58:26.683798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.276 [2024-05-15 19:58:26.716426] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:34.276 [2024-05-15 19:58:26.716461] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
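The nbd_get_count checks that bracket each round above reduce to counting entries in the JSON returned by nbd_get_disks. A minimal sketch, reusing the $RPC stand-in from the earlier sketch:

# nbd_get_disks prints a JSON array of {"nbd_device": ..., "bdev_name": ...} objects;
# grep -c /dev/nbd counts how many of them name an NBD device, and || true keeps an
# empty result (count 0, grep exit 1) from aborting a `set -e` script, as the trace
# above does with its trailing `true`
count=$($RPC nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
echo "$count"        # 2 while Malloc0/Malloc1 are attached, 0 after nbd_stop_disk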
00:06:37.578 19:58:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:37.578 19:58:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:37.578 spdk_app_start Round 2 00:06:37.578 19:58:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4004104 /var/tmp/spdk-nbd.sock 00:06:37.578 19:58:29 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 4004104 ']' 00:06:37.578 19:58:29 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:37.578 19:58:29 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:37.578 19:58:29 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:37.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:37.578 19:58:29 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:37.578 19:58:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:37.578 19:58:29 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:37.578 19:58:29 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:37.578 19:58:29 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:37.578 Malloc0 00:06:37.578 19:58:29 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:37.578 Malloc1 00:06:37.578 19:58:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:37.578 19:58:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.578 19:58:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:37.578 19:58:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:37.578 19:58:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.578 19:58:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:37.578 19:58:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:37.578 19:58:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.578 19:58:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:37.578 19:58:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:37.578 19:58:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.578 19:58:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:37.578 19:58:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:37.578 19:58:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:37.578 19:58:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.578 19:58:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:37.865 /dev/nbd0 00:06:37.865 19:58:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:37.865 19:58:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:06:37.865 19:58:30 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:37.865 19:58:30 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:37.865 19:58:30 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:37.865 19:58:30 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:37.865 19:58:30 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:37.865 19:58:30 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:37.865 19:58:30 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:37.865 19:58:30 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:37.865 19:58:30 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:37.865 1+0 records in 00:06:37.865 1+0 records out 00:06:37.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225663 s, 18.2 MB/s 00:06:37.865 19:58:30 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:37.865 19:58:30 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:37.865 19:58:30 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:37.865 19:58:30 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:37.865 19:58:30 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:37.865 19:58:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.865 19:58:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.865 19:58:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:38.126 /dev/nbd1 00:06:38.126 19:58:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:38.126 19:58:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:38.126 19:58:30 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:38.126 19:58:30 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:38.126 19:58:30 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:38.126 19:58:30 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:38.126 19:58:30 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:38.126 19:58:30 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:38.126 19:58:30 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:38.126 19:58:30 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:38.126 19:58:30 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:38.126 1+0 records in 00:06:38.126 1+0 records out 00:06:38.126 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255847 s, 16.0 MB/s 00:06:38.126 19:58:30 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:38.126 19:58:30 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:38.126 19:58:30 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:38.126 19:58:30 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:38.126 19:58:30 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:38.126 19:58:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.126 19:58:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.126 19:58:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:38.127 19:58:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.127 19:58:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:38.388 19:58:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:38.388 { 00:06:38.388 "nbd_device": "/dev/nbd0", 00:06:38.388 "bdev_name": "Malloc0" 00:06:38.388 }, 00:06:38.388 { 00:06:38.388 "nbd_device": "/dev/nbd1", 00:06:38.388 "bdev_name": "Malloc1" 00:06:38.388 } 00:06:38.388 ]' 00:06:38.388 19:58:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:38.388 { 00:06:38.388 "nbd_device": "/dev/nbd0", 00:06:38.388 "bdev_name": "Malloc0" 00:06:38.388 }, 00:06:38.388 { 00:06:38.388 "nbd_device": "/dev/nbd1", 00:06:38.388 "bdev_name": "Malloc1" 00:06:38.388 } 00:06:38.388 ]' 00:06:38.388 19:58:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:38.388 19:58:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:38.388 /dev/nbd1' 00:06:38.388 19:58:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:38.388 19:58:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:38.388 /dev/nbd1' 00:06:38.388 19:58:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:38.388 19:58:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:38.388 19:58:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:38.388 19:58:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:38.388 19:58:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:38.388 19:58:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.388 19:58:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:38.388 19:58:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:38.388 19:58:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:38.388 19:58:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:38.388 19:58:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:38.388 256+0 records in 00:06:38.388 256+0 records out 00:06:38.388 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124743 s, 84.1 MB/s 00:06:38.388 19:58:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:38.388 19:58:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:38.388 256+0 records in 00:06:38.388 256+0 records out 00:06:38.388 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.041046 s, 25.5 MB/s 00:06:38.388 19:58:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:38.388 19:58:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:38.388 256+0 records in 00:06:38.388 256+0 records out 00:06:38.388 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0338112 s, 31.0 MB/s 00:06:38.388 19:58:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:38.389 19:58:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.389 19:58:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:38.389 19:58:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:38.389 19:58:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:38.389 19:58:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:38.389 19:58:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:38.389 19:58:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.389 19:58:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:38.389 19:58:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.389 19:58:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:38.389 19:58:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:38.389 19:58:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:38.389 19:58:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.389 19:58:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.389 19:58:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:38.389 19:58:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:38.389 19:58:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.389 19:58:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:38.649 19:58:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:38.649 19:58:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:38.649 19:58:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:38.649 19:58:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.649 19:58:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.649 19:58:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:38.649 19:58:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:38.649 19:58:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.649 19:58:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.650 19:58:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:38.911 19:58:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:38.911 19:58:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:38.911 19:58:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:38.911 19:58:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.911 19:58:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.911 19:58:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:38.911 19:58:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:38.911 19:58:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.911 19:58:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:38.911 19:58:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.911 19:58:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.173 19:58:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:39.173 19:58:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:39.173 19:58:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.173 19:58:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:39.173 19:58:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:39.173 19:58:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.173 19:58:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:39.173 19:58:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:39.173 19:58:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:39.173 19:58:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:39.173 19:58:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:39.173 19:58:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:39.173 19:58:31 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:39.433 19:58:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:39.434 [2024-05-15 19:58:31.823990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:39.434 [2024-05-15 19:58:31.886333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.434 [2024-05-15 19:58:31.886339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.434 [2024-05-15 19:58:31.918269] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:39.434 [2024-05-15 19:58:31.918304] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
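The waitfornbd calls traced in each round poll /proc/partitions and then verify a direct read. The following is an approximate reconstruction from the trace, not a copy of the helper in autotest_common.sh; the retry sleep and the failure paths are assumptions about branches the trace above never exercises (the device was always ready on the first attempt).

waitfornbd_sketch() {
    local nbd_name=$1 i size
    local scratch=/tmp/nbdtest           # the test writes test/event/nbdtest instead

    # wait up to 20 attempts for the kernel to list the device in /proc/partitions
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                        # interval not visible in the trace; assumed
    done
    grep -q -w "$nbd_name" /proc/partitions || return 1   # never showed up; assumed

    # then confirm that a single 4096-byte direct read returns real data
    for ((i = 1; i <= 20; i++)); do
        dd if="/dev/$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$scratch")
        rm -f "$scratch"
        [ "$size" != 0 ] && return 0
        sleep 0.1                        # assumed, as above
    done
    return 1
}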
00:06:42.734 19:58:34 event.app_repeat -- event/event.sh@38 -- # waitforlisten 4004104 /var/tmp/spdk-nbd.sock 00:06:42.734 19:58:34 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 4004104 ']' 00:06:42.734 19:58:34 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:42.734 19:58:34 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:42.734 19:58:34 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:42.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:42.734 19:58:34 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:42.734 19:58:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:42.734 19:58:34 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:42.734 19:58:34 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:42.734 19:58:34 event.app_repeat -- event/event.sh@39 -- # killprocess 4004104 00:06:42.734 19:58:34 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 4004104 ']' 00:06:42.734 19:58:34 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 4004104 00:06:42.734 19:58:34 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:06:42.734 19:58:34 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:42.734 19:58:34 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4004104 00:06:42.734 19:58:34 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:42.734 19:58:34 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:42.734 19:58:34 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4004104' 00:06:42.734 killing process with pid 4004104 00:06:42.734 19:58:34 event.app_repeat -- common/autotest_common.sh@965 -- # kill 4004104 00:06:42.734 19:58:34 event.app_repeat -- common/autotest_common.sh@970 -- # wait 4004104 00:06:42.734 spdk_app_start is called in Round 0. 00:06:42.734 Shutdown signal received, stop current app iteration 00:06:42.734 Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 reinitialization... 00:06:42.734 spdk_app_start is called in Round 1. 00:06:42.734 Shutdown signal received, stop current app iteration 00:06:42.734 Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 reinitialization... 00:06:42.734 spdk_app_start is called in Round 2. 00:06:42.734 Shutdown signal received, stop current app iteration 00:06:42.734 Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 reinitialization... 00:06:42.734 spdk_app_start is called in Round 3. 
00:06:42.734 Shutdown signal received, stop current app iteration 00:06:42.734 19:58:35 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:42.734 19:58:35 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:42.734 00:06:42.734 real 0m16.117s 00:06:42.734 user 0m35.361s 00:06:42.734 sys 0m2.250s 00:06:42.734 19:58:35 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:42.734 19:58:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:42.734 ************************************ 00:06:42.734 END TEST app_repeat 00:06:42.734 ************************************ 00:06:42.734 19:58:35 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:42.734 19:58:35 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:42.734 19:58:35 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:42.734 19:58:35 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:42.734 19:58:35 event -- common/autotest_common.sh@10 -- # set +x 00:06:42.734 ************************************ 00:06:42.734 START TEST cpu_locks 00:06:42.734 ************************************ 00:06:42.734 19:58:35 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:42.995 * Looking for test storage... 00:06:42.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:42.995 19:58:35 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:42.995 19:58:35 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:42.995 19:58:35 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:42.995 19:58:35 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:42.995 19:58:35 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:42.995 19:58:35 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:42.995 19:58:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.995 ************************************ 00:06:42.995 START TEST default_locks 00:06:42.995 ************************************ 00:06:42.995 19:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:06:42.995 19:58:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=4007676 00:06:42.995 19:58:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 4007676 00:06:42.995 19:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 4007676 ']' 00:06:42.995 19:58:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:42.995 19:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.995 19:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:42.995 19:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
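The default_locks test starting here boils down to checking that spdk_tgt, launched with core mask 0x1, holds a lock file for its core. A rough sketch of that check; the spdk_tgt path is abbreviated, the fixed sleep stands in for the waitforlisten polling traced above, and the lslocks pipeline mirrors the locks_exist helper visible further down in the trace.

# start the target pinned to core 0 and give it a moment to come up
./build/bin/spdk_tgt -m 0x1 &
tgt_pid=$!
sleep 2                                  # the real test waits on /var/tmp/spdk.sock instead

# the reactor on core 0 should hold a file lock whose path contains "spdk_cpu_lock"
if lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock; then
    echo "cpu core lock held by pid $tgt_pid"
fi

kill "$tgt_pid"
wait "$tgt_pid"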
00:06:42.995 19:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:42.995 19:58:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.995 [2024-05-15 19:58:35.351834] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:06:42.995 [2024-05-15 19:58:35.351896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4007676 ] 00:06:42.995 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.995 [2024-05-15 19:58:35.439230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.257 [2024-05-15 19:58:35.513156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.829 19:58:36 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:43.829 19:58:36 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:06:43.829 19:58:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 4007676 00:06:43.829 19:58:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 4007676 00:06:43.829 19:58:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:44.401 lslocks: write error 00:06:44.401 19:58:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 4007676 00:06:44.401 19:58:36 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 4007676 ']' 00:06:44.401 19:58:36 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 4007676 00:06:44.401 19:58:36 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:06:44.401 19:58:36 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:44.401 19:58:36 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4007676 00:06:44.401 19:58:36 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:44.401 19:58:36 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:44.401 19:58:36 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4007676' 00:06:44.401 killing process with pid 4007676 00:06:44.401 19:58:36 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 4007676 00:06:44.401 19:58:36 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 4007676 00:06:44.662 19:58:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 4007676 00:06:44.662 19:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:44.662 19:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 4007676 00:06:44.662 19:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:44.662 19:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.662 19:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:44.662 19:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.662 19:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- 
# waitforlisten 4007676 00:06:44.662 19:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 4007676 ']' 00:06:44.662 19:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.662 19:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:44.662 19:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.662 19:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:44.662 19:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (4007676) - No such process 00:06:44.662 ERROR: process (pid: 4007676) is no longer running 00:06:44.662 19:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:44.662 19:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:06:44.662 19:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:44.662 19:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:44.662 19:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:44.662 19:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:44.662 19:58:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:44.662 19:58:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:44.662 19:58:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:44.662 19:58:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:44.662 00:06:44.662 real 0m1.724s 00:06:44.662 user 0m1.867s 00:06:44.662 sys 0m0.586s 00:06:44.662 19:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:44.662 19:58:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.662 ************************************ 00:06:44.662 END TEST default_locks 00:06:44.662 ************************************ 00:06:44.662 19:58:37 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:44.662 19:58:37 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:44.662 19:58:37 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.662 19:58:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.662 ************************************ 00:06:44.662 START TEST default_locks_via_rpc 00:06:44.662 ************************************ 00:06:44.662 19:58:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:06:44.662 19:58:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=4008047 00:06:44.662 19:58:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 4008047 00:06:44.662 19:58:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:44.662 19:58:37 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 4008047 ']' 00:06:44.662 19:58:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.662 19:58:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:44.662 19:58:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.663 19:58:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:44.663 19:58:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.663 [2024-05-15 19:58:37.156614] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:06:44.663 [2024-05-15 19:58:37.156662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4008047 ] 00:06:44.924 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.924 [2024-05-15 19:58:37.240590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.924 [2024-05-15 19:58:37.305305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.865 19:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:45.866 19:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:45.866 19:58:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:45.866 19:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.866 19:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.866 19:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.866 19:58:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:45.866 19:58:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:45.866 19:58:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:45.866 19:58:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:45.866 19:58:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:45.866 19:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.866 19:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.866 19:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.866 19:58:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 4008047 00:06:45.866 19:58:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 4008047 00:06:45.866 19:58:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:46.127 19:58:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 4008047 00:06:46.127 19:58:38 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 4008047 ']' 00:06:46.127 19:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 4008047 00:06:46.127 19:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:06:46.127 19:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:46.127 19:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4008047 00:06:46.127 19:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:46.127 19:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:46.127 19:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4008047' 00:06:46.127 killing process with pid 4008047 00:06:46.127 19:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 4008047 00:06:46.127 19:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 4008047 00:06:46.389 00:06:46.389 real 0m1.634s 00:06:46.389 user 0m1.802s 00:06:46.389 sys 0m0.509s 00:06:46.389 19:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.389 19:58:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.389 ************************************ 00:06:46.389 END TEST default_locks_via_rpc 00:06:46.389 ************************************ 00:06:46.389 19:58:38 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:46.389 19:58:38 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:46.389 19:58:38 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.389 19:58:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.389 ************************************ 00:06:46.389 START TEST non_locking_app_on_locked_coremask 00:06:46.389 ************************************ 00:06:46.389 19:58:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:06:46.389 19:58:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=4008410 00:06:46.389 19:58:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 4008410 /var/tmp/spdk.sock 00:06:46.389 19:58:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:46.389 19:58:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 4008410 ']' 00:06:46.389 19:58:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.389 19:58:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:46.389 19:58:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
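The locks_exist check exercised in the two tests above reduces to asking lslocks whether the target process still holds a flock on an spdk_cpu_lock file. A minimal stand-alone sketch, assuming a bash shell on the test host; the helper name and the example pid are illustrative, and only the lslocks | grep -q pattern is taken from the trace:

  # Hypothetical helper mirroring the lslocks | grep -q pattern in the xtrace above.
  # The "lslocks: write error" seen in the log is expected: grep -q exits on the first
  # match and lslocks hits a broken pipe while still writing.
  locks_exist() {
      local pid=$1
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }
  locks_exist 4007676 && echo "core lock held" || echo "no core lock"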
00:06:46.389 19:58:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:46.389 19:58:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.389 [2024-05-15 19:58:38.871170] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:06:46.389 [2024-05-15 19:58:38.871220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4008410 ] 00:06:46.649 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.649 [2024-05-15 19:58:38.952567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.649 [2024-05-15 19:58:39.017751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.591 19:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:47.591 19:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:47.591 19:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=4008534 00:06:47.591 19:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 4008534 /var/tmp/spdk2.sock 00:06:47.591 19:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 4008534 ']' 00:06:47.591 19:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:47.591 19:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.591 19:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:47.592 19:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:47.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:47.592 19:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:47.592 19:58:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.592 [2024-05-15 19:58:39.779403] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:06:47.592 [2024-05-15 19:58:39.779456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4008534 ] 00:06:47.592 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.592 [2024-05-15 19:58:39.880151] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
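The "CPU core locks deactivated" notice above is the startup-time effect of --disable-cpumask-locks; the default_locks_via_rpc test earlier toggled the same state at runtime through the framework_disable_cpumask_locks and framework_enable_cpumask_locks RPCs. A hedged sketch of issuing those calls with SPDK's scripts/rpc.py client, assuming it is run from the spdk checkout and a target is already listening on /var/tmp/spdk.sock:

  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # releases the /var/tmp/spdk_cpu_lock_* files
  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # re-claims the configured cores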
00:06:47.592 [2024-05-15 19:58:39.880181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.592 [2024-05-15 19:58:40.009509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.163 19:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:48.163 19:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:48.163 19:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 4008410 00:06:48.163 19:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4008410 00:06:48.163 19:58:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:48.735 lslocks: write error 00:06:48.735 19:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 4008410 00:06:48.735 19:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 4008410 ']' 00:06:48.735 19:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 4008410 00:06:48.735 19:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:48.735 19:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:48.735 19:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4008410 00:06:48.996 19:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:48.996 19:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:48.996 19:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4008410' 00:06:48.996 killing process with pid 4008410 00:06:48.996 19:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 4008410 00:06:48.996 19:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 4008410 00:06:49.257 19:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 4008534 00:06:49.257 19:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 4008534 ']' 00:06:49.257 19:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 4008534 00:06:49.257 19:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:49.257 19:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:49.257 19:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4008534 00:06:49.257 19:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:49.257 19:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:49.257 19:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4008534' 00:06:49.257 
killing process with pid 4008534 00:06:49.257 19:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 4008534 00:06:49.257 19:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 4008534 00:06:49.518 00:06:49.518 real 0m3.098s 00:06:49.518 user 0m3.507s 00:06:49.518 sys 0m0.923s 00:06:49.518 19:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:49.518 19:58:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.518 ************************************ 00:06:49.518 END TEST non_locking_app_on_locked_coremask 00:06:49.518 ************************************ 00:06:49.518 19:58:41 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:49.518 19:58:41 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:49.518 19:58:41 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:49.518 19:58:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.518 ************************************ 00:06:49.518 START TEST locking_app_on_unlocked_coremask 00:06:49.518 ************************************ 00:06:49.518 19:58:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:06:49.518 19:58:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=4009125 00:06:49.518 19:58:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 4009125 /var/tmp/spdk.sock 00:06:49.518 19:58:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:49.518 19:58:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 4009125 ']' 00:06:49.519 19:58:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.519 19:58:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:49.519 19:58:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.519 19:58:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:49.519 19:58:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.780 [2024-05-15 19:58:42.048939] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:06:49.780 [2024-05-15 19:58:42.048991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4009125 ] 00:06:49.780 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.780 [2024-05-15 19:58:42.130848] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
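Both the test that just finished and the one starting here hinge on --disable-cpumask-locks: a target started with that flag never creates /var/tmp/spdk_cpu_lock_* files, so it can share a core mask with a locking instance. A rough reproduction outside the harness, with the binary path and flags copied from the trace and the backgrounding simplified:

  # First instance claims core 0 (creates /var/tmp/spdk_cpu_lock_000).
  ./build/bin/spdk_tgt -m 0x1 &
  # Second instance on the same core, but with locking disabled and its own RPC
  # socket, so startup succeeds instead of failing on the already-claimed core.
  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &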
00:06:49.780 [2024-05-15 19:58:42.130877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.780 [2024-05-15 19:58:42.197901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.722 19:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:50.722 19:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:50.722 19:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=4009153 00:06:50.722 19:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 4009153 /var/tmp/spdk2.sock 00:06:50.722 19:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 4009153 ']' 00:06:50.722 19:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:50.722 19:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.722 19:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:50.722 19:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.722 19:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:50.722 19:58:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.722 [2024-05-15 19:58:42.960297] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:06:50.722 [2024-05-15 19:58:42.960354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4009153 ] 00:06:50.722 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.722 [2024-05-15 19:58:43.057892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.722 [2024-05-15 19:58:43.189115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.666 19:58:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:51.667 19:58:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:51.667 19:58:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 4009153 00:06:51.667 19:58:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4009153 00:06:51.667 19:58:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:51.928 lslocks: write error 00:06:51.928 19:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 4009125 00:06:51.928 19:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 4009125 ']' 00:06:51.928 19:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 4009125 00:06:51.928 19:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:51.928 19:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:51.928 19:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4009125 00:06:51.928 19:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:51.928 19:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:51.928 19:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4009125' 00:06:51.928 killing process with pid 4009125 00:06:51.928 19:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 4009125 00:06:51.928 19:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 4009125 00:06:52.499 19:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 4009153 00:06:52.499 19:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 4009153 ']' 00:06:52.499 19:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 4009153 00:06:52.499 19:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:52.499 19:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:52.499 19:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4009153 00:06:52.499 19:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 
00:06:52.499 19:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:52.499 19:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4009153' 00:06:52.499 killing process with pid 4009153 00:06:52.499 19:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 4009153 00:06:52.499 19:58:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 4009153 00:06:52.760 00:06:52.760 real 0m3.110s 00:06:52.760 user 0m3.527s 00:06:52.760 sys 0m0.925s 00:06:52.760 19:58:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:52.760 19:58:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.760 ************************************ 00:06:52.760 END TEST locking_app_on_unlocked_coremask 00:06:52.760 ************************************ 00:06:52.760 19:58:45 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:52.760 19:58:45 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:52.760 19:58:45 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:52.760 19:58:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.760 ************************************ 00:06:52.760 START TEST locking_app_on_locked_coremask 00:06:52.760 ************************************ 00:06:52.760 19:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:06:52.760 19:58:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=4009822 00:06:52.760 19:58:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 4009822 /var/tmp/spdk.sock 00:06:52.760 19:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 4009822 ']' 00:06:52.760 19:58:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:52.760 19:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.760 19:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:52.760 19:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.760 19:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:52.760 19:58:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.760 [2024-05-15 19:58:45.238672] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
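The killprocess calls repeated through these traces all follow the shape visible in the xtrace: verify the pid is still alive, read its comm name with ps (expected to be reactor_0, with a branch for sudo-wrapped targets), announce it, send SIGTERM, then wait for the child to be reaped. A much-simplified sketch; the real helper lives in test/common/autotest_common.sh:

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1                   # still running?
      ps --no-headers -o comm= "$pid"              # the trace expects reactor_0 here
      echo "killing process with pid $pid"
      kill "$pid"                                  # plain SIGTERM, matching the bare kill in the xtrace
      wait "$pid" 2>/dev/null || true              # reaps it when the target was started by this shell
  }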
00:06:52.760 [2024-05-15 19:58:45.238728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4009822 ] 00:06:53.020 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.020 [2024-05-15 19:58:45.321955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.021 [2024-05-15 19:58:45.389525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.961 19:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:53.961 19:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:53.961 19:58:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=4009848 00:06:53.961 19:58:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 4009848 /var/tmp/spdk2.sock 00:06:53.961 19:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:53.961 19:58:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:53.961 19:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 4009848 /var/tmp/spdk2.sock 00:06:53.961 19:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:53.961 19:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.961 19:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:53.961 19:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.961 19:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 4009848 /var/tmp/spdk2.sock 00:06:53.961 19:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 4009848 ']' 00:06:53.961 19:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.961 19:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:53.961 19:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.961 19:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:53.961 19:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.961 [2024-05-15 19:58:46.151889] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:06:53.961 [2024-05-15 19:58:46.151946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4009848 ] 00:06:53.961 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.961 [2024-05-15 19:58:46.250923] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 4009822 has claimed it. 00:06:53.961 [2024-05-15 19:58:46.250962] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:54.533 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (4009848) - No such process 00:06:54.533 ERROR: process (pid: 4009848) is no longer running 00:06:54.533 19:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:54.533 19:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:54.533 19:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:54.533 19:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:54.533 19:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:54.533 19:58:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:54.533 19:58:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 4009822 00:06:54.533 19:58:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4009822 00:06:54.533 19:58:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:55.174 lslocks: write error 00:06:55.174 19:58:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 4009822 00:06:55.174 19:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 4009822 ']' 00:06:55.174 19:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 4009822 00:06:55.174 19:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:55.174 19:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:55.174 19:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4009822 00:06:55.174 19:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:55.174 19:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:55.174 19:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4009822' 00:06:55.174 killing process with pid 4009822 00:06:55.174 19:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 4009822 00:06:55.174 19:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 4009822 00:06:55.174 00:06:55.174 real 0m2.376s 00:06:55.174 user 0m2.751s 00:06:55.174 sys 0m0.630s 00:06:55.174 19:58:47 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:06:55.174 19:58:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.174 ************************************ 00:06:55.174 END TEST locking_app_on_locked_coremask 00:06:55.174 ************************************ 00:06:55.174 19:58:47 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:55.174 19:58:47 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:55.174 19:58:47 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:55.174 19:58:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.174 ************************************ 00:06:55.174 START TEST locking_overlapped_coremask 00:06:55.174 ************************************ 00:06:55.174 19:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:06:55.174 19:58:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=4010213 00:06:55.174 19:58:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 4010213 /var/tmp/spdk.sock 00:06:55.174 19:58:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:55.174 19:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 4010213 ']' 00:06:55.174 19:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.174 19:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:55.174 19:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.174 19:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:55.174 19:58:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.435 [2024-05-15 19:58:47.700276] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
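The failure path above is the point of locking_app_on_locked_coremask: a second spdk_tgt without --disable-cpumask-locks cannot claim core 0 while pid 4009822 holds it, so it logs "Unable to acquire lock on assigned core mask - exiting" and dies, and the NOT wrapper turns that non-zero exit into a passing assertion. A hand-run sketch of the same conflict, with a sleep standing in for the waitforlisten the harness uses:

  ./build/bin/spdk_tgt -m 0x1 &                        # claims /var/tmp/spdk_cpu_lock_000
  sleep 2                                              # crude stand-in for waitforlisten
  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock   # exits after "Cannot create lock on core 0 ..."
  echo "second instance exited with status $?"         # non-zero, as the test asserts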
00:06:55.435 [2024-05-15 19:58:47.700344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4010213 ] 00:06:55.435 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.435 [2024-05-15 19:58:47.783335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:55.435 [2024-05-15 19:58:47.852294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.435 [2024-05-15 19:58:47.852443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.435 [2024-05-15 19:58:47.852537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.378 19:58:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:56.378 19:58:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:56.378 19:58:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=4010502 00:06:56.378 19:58:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:56.378 19:58:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 4010502 /var/tmp/spdk2.sock 00:06:56.378 19:58:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:56.378 19:58:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 4010502 /var/tmp/spdk2.sock 00:06:56.378 19:58:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:56.378 19:58:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.378 19:58:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:56.378 19:58:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.378 19:58:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 4010502 /var/tmp/spdk2.sock 00:06:56.378 19:58:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 4010502 ']' 00:06:56.378 19:58:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:56.378 19:58:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:56.378 19:58:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:56.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:56.378 19:58:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:56.378 19:58:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.378 [2024-05-15 19:58:48.564684] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:06:56.378 [2024-05-15 19:58:48.564736] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4010502 ] 00:06:56.378 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.378 [2024-05-15 19:58:48.644499] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4010213 has claimed it. 00:06:56.378 [2024-05-15 19:58:48.644530] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:56.950 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (4010502) - No such process 00:06:56.950 ERROR: process (pid: 4010502) is no longer running 00:06:56.950 19:58:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:56.950 19:58:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:56.950 19:58:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:56.950 19:58:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:56.950 19:58:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:56.950 19:58:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:56.950 19:58:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:56.950 19:58:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:56.950 19:58:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:56.950 19:58:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:56.950 19:58:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 4010213 00:06:56.950 19:58:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 4010213 ']' 00:06:56.950 19:58:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 4010213 00:06:56.950 19:58:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:06:56.950 19:58:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:56.950 19:58:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4010213 00:06:56.950 19:58:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:56.950 19:58:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:56.950 19:58:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4010213' 00:06:56.950 killing process with pid 4010213 00:06:56.950 19:58:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 
4010213 00:06:56.950 19:58:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 4010213 00:06:57.210 00:06:57.210 real 0m1.855s 00:06:57.210 user 0m5.308s 00:06:57.210 sys 0m0.391s 00:06:57.210 19:58:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.210 19:58:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.210 ************************************ 00:06:57.210 END TEST locking_overlapped_coremask 00:06:57.210 ************************************ 00:06:57.210 19:58:49 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:57.210 19:58:49 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:57.210 19:58:49 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.210 19:58:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.211 ************************************ 00:06:57.211 START TEST locking_overlapped_coremask_via_rpc 00:06:57.211 ************************************ 00:06:57.211 19:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:06:57.211 19:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=4010588 00:06:57.211 19:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 4010588 /var/tmp/spdk.sock 00:06:57.211 19:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:57.211 19:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 4010588 ']' 00:06:57.211 19:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.211 19:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:57.211 19:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.211 19:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:57.211 19:58:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.211 [2024-05-15 19:58:49.633427] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:06:57.211 [2024-05-15 19:58:49.633475] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4010588 ] 00:06:57.211 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.471 [2024-05-15 19:58:49.716289] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
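check_remaining_locks, run just before the overlapped-coremask target was killed, asserts that a target started with -m 0x7 holds exactly the three per-core lock files and nothing more. The same state can be inspected by hand; a sketch, noting that lslocks column layout varies between util-linux versions:

  ls /var/tmp/spdk_cpu_lock_*          # expect exactly ..._000 ..._001 ..._002 for core mask 0x7
  lslocks | grep spdk_cpu_lock         # one flock entry per claimed core, held by the spdk_tgt pid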
00:06:57.471 [2024-05-15 19:58:49.716328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:57.471 [2024-05-15 19:58:49.784926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.471 [2024-05-15 19:58:49.785041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.471 [2024-05-15 19:58:49.785045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.044 19:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:58.044 19:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:58.044 19:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=4010922 00:06:58.044 19:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 4010922 /var/tmp/spdk2.sock 00:06:58.044 19:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 4010922 ']' 00:06:58.044 19:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:58.044 19:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:58.044 19:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:58.044 19:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:58.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:58.044 19:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:58.044 19:58:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.306 [2024-05-15 19:58:50.558033] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:06:58.306 [2024-05-15 19:58:50.558084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4010922 ] 00:06:58.306 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.306 [2024-05-15 19:58:50.640236] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:58.306 [2024-05-15 19:58:50.640263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:58.306 [2024-05-15 19:58:50.745746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:58.306 [2024-05-15 19:58:50.745908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.306 [2024-05-15 19:58:50.745911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:59.248 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:59.248 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:59.248 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:59.248 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.248 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.248 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.248 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:59.248 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:59.248 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:59.248 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:59.248 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.248 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:59.248 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.248 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:59.248 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.248 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.248 [2024-05-15 19:58:51.438375] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4010588 has claimed it. 
00:06:59.248 request: 00:06:59.248 { 00:06:59.248 "method": "framework_enable_cpumask_locks", 00:06:59.248 "req_id": 1 00:06:59.248 } 00:06:59.248 Got JSON-RPC error response 00:06:59.248 response: 00:06:59.248 { 00:06:59.248 "code": -32603, 00:06:59.248 "message": "Failed to claim CPU core: 2" 00:06:59.248 } 00:06:59.248 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:59.248 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:59.248 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:59.248 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:59.248 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:59.248 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 4010588 /var/tmp/spdk.sock 00:06:59.248 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 4010588 ']' 00:06:59.248 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.248 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:59.248 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.248 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:59.248 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.248 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:59.249 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:59.249 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 4010922 /var/tmp/spdk2.sock 00:06:59.249 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 4010922 ']' 00:06:59.249 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:59.249 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:59.249 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:59.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
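With both targets started under --disable-cpumask-locks, whichever one enables locking first wins the shared core: the first framework_enable_cpumask_locks call claims cores 0-2 for the -m 0x7 target, and the second call over /var/tmp/spdk2.sock fails with the -32603 "Failed to claim CPU core: 2" response shown above instead of killing the process. A sketch of driving the same pair of calls with scripts/rpc.py, socket paths taken from the trace:

  ./scripts/rpc.py -s /var/tmp/spdk.sock  framework_enable_cpumask_locks   # first target claims cores 0-2
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # fails: core 2 is already claimed (-32603)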
00:06:59.249 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:59.249 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.510 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:59.510 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:59.510 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:59.510 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:59.510 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:59.510 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:59.510 00:06:59.510 real 0m2.305s 00:06:59.510 user 0m1.044s 00:06:59.510 sys 0m0.178s 00:06:59.510 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:59.510 19:58:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.510 ************************************ 00:06:59.510 END TEST locking_overlapped_coremask_via_rpc 00:06:59.510 ************************************ 00:06:59.510 19:58:51 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:59.510 19:58:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4010588 ]] 00:06:59.510 19:58:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 4010588 00:06:59.510 19:58:51 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 4010588 ']' 00:06:59.510 19:58:51 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 4010588 00:06:59.510 19:58:51 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:59.510 19:58:51 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:59.510 19:58:51 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4010588 00:06:59.510 19:58:51 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:59.510 19:58:51 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:59.510 19:58:51 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4010588' 00:06:59.510 killing process with pid 4010588 00:06:59.510 19:58:51 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 4010588 00:06:59.510 19:58:51 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 4010588 00:06:59.770 19:58:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4010922 ]] 00:06:59.770 19:58:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4010922 00:06:59.770 19:58:52 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 4010922 ']' 00:06:59.770 19:58:52 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 4010922 00:06:59.770 19:58:52 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:59.770 19:58:52 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:06:59.770 19:58:52 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4010922 00:06:59.770 19:58:52 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:59.770 19:58:52 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:59.770 19:58:52 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4010922' 00:06:59.770 killing process with pid 4010922 00:06:59.770 19:58:52 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 4010922 00:06:59.770 19:58:52 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 4010922 00:07:00.032 19:58:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:00.032 19:58:52 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:00.032 19:58:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4010588 ]] 00:07:00.032 19:58:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 4010588 00:07:00.032 19:58:52 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 4010588 ']' 00:07:00.032 19:58:52 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 4010588 00:07:00.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (4010588) - No such process 00:07:00.032 19:58:52 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 4010588 is not found' 00:07:00.032 Process with pid 4010588 is not found 00:07:00.032 19:58:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4010922 ]] 00:07:00.032 19:58:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4010922 00:07:00.032 19:58:52 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 4010922 ']' 00:07:00.032 19:58:52 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 4010922 00:07:00.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (4010922) - No such process 00:07:00.032 19:58:52 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 4010922 is not found' 00:07:00.032 Process with pid 4010922 is not found 00:07:00.032 19:58:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:00.032 00:07:00.032 real 0m17.296s 00:07:00.032 user 0m30.604s 00:07:00.032 sys 0m5.055s 00:07:00.032 19:58:52 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:00.032 19:58:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.032 ************************************ 00:07:00.032 END TEST cpu_locks 00:07:00.032 ************************************ 00:07:00.032 00:07:00.032 real 0m43.089s 00:07:00.032 user 1m24.402s 00:07:00.032 sys 0m8.309s 00:07:00.032 19:58:52 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:00.032 19:58:52 event -- common/autotest_common.sh@10 -- # set +x 00:07:00.032 ************************************ 00:07:00.032 END TEST event 00:07:00.032 ************************************ 00:07:00.032 19:58:52 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:00.032 19:58:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:00.032 19:58:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:00.032 19:58:52 -- common/autotest_common.sh@10 -- # set +x 00:07:00.294 ************************************ 00:07:00.294 START TEST thread 00:07:00.294 ************************************ 00:07:00.294 19:58:52 thread -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:00.294 * Looking for test storage... 00:07:00.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:00.294 19:58:52 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:00.294 19:58:52 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:00.294 19:58:52 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:00.294 19:58:52 thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.294 ************************************ 00:07:00.294 START TEST thread_poller_perf 00:07:00.294 ************************************ 00:07:00.294 19:58:52 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:00.294 [2024-05-15 19:58:52.730837] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:07:00.294 [2024-05-15 19:58:52.730937] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4011359 ] 00:07:00.294 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.554 [2024-05-15 19:58:52.826014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.554 [2024-05-15 19:58:52.905060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.554 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:01.495 ====================================== 00:07:01.495 busy:2412681014 (cyc) 00:07:01.495 total_run_count: 288000 00:07:01.495 tsc_hz: 2400000000 (cyc) 00:07:01.495 ====================================== 00:07:01.495 poller_cost: 8377 (cyc), 3490 (nsec) 00:07:01.495 00:07:01.495 real 0m1.260s 00:07:01.495 user 0m1.152s 00:07:01.495 sys 0m0.102s 00:07:01.495 19:58:53 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:01.495 19:58:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:01.495 ************************************ 00:07:01.495 END TEST thread_poller_perf 00:07:01.495 ************************************ 00:07:01.756 19:58:54 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:01.756 19:58:54 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:01.756 19:58:54 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:01.756 19:58:54 thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.756 ************************************ 00:07:01.756 START TEST thread_poller_perf 00:07:01.756 ************************************ 00:07:01.756 19:58:54 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:01.756 [2024-05-15 19:58:54.070283] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:07:01.756 [2024-05-15 19:58:54.070391] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4011713 ] 00:07:01.756 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.756 [2024-05-15 19:58:54.158275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.756 [2024-05-15 19:58:54.235379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.756 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:03.140 ====================================== 00:07:03.140 busy:2402324062 (cyc) 00:07:03.140 total_run_count: 3813000 00:07:03.140 tsc_hz: 2400000000 (cyc) 00:07:03.140 ====================================== 00:07:03.140 poller_cost: 630 (cyc), 262 (nsec) 00:07:03.140 00:07:03.140 real 0m1.241s 00:07:03.140 user 0m1.152s 00:07:03.140 sys 0m0.085s 00:07:03.140 19:58:55 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.140 19:58:55 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:03.140 ************************************ 00:07:03.140 END TEST thread_poller_perf 00:07:03.140 ************************************ 00:07:03.140 19:58:55 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:03.140 00:07:03.140 real 0m2.761s 00:07:03.140 user 0m2.392s 00:07:03.140 sys 0m0.368s 00:07:03.140 19:58:55 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.140 19:58:55 thread -- common/autotest_common.sh@10 -- # set +x 00:07:03.140 ************************************ 00:07:03.140 END TEST thread 00:07:03.140 ************************************ 00:07:03.140 19:58:55 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:03.140 19:58:55 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:03.140 19:58:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.140 19:58:55 -- common/autotest_common.sh@10 -- # set +x 00:07:03.140 ************************************ 00:07:03.140 START TEST accel 00:07:03.140 ************************************ 00:07:03.140 19:58:55 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:03.140 * Looking for test storage... 00:07:03.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:03.140 19:58:55 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:03.140 19:58:55 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:03.140 19:58:55 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:03.140 19:58:55 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=4012106 00:07:03.140 19:58:55 accel -- accel/accel.sh@63 -- # waitforlisten 4012106 00:07:03.140 19:58:55 accel -- common/autotest_common.sh@827 -- # '[' -z 4012106 ']' 00:07:03.140 19:58:55 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.140 19:58:55 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:03.140 19:58:55 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
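For reference, the poller_cost figures in the two ====== summary blocks above follow directly from the reported counters: cost in cycles is busy cycles divided by total_run_count, and the nanosecond value converts that at the reported 2.4 GHz TSC. A quick check in shell arithmetic, using the values copied from the log:

  # poller_cost [cyc]  = busy / total_run_count
  # poller_cost [nsec] = cyc * 1e9 / tsc_hz
  echo $(( 2412681014 / 288000 ))               # 8377 cyc, 1 us period run
  echo $(( 8377 * 1000000000 / 2400000000 ))    # 3490 nsec
  echo $(( 2402324062 / 3813000 ))              # 630 cyc, 0 us period run
  echo $(( 630 * 1000000000 / 2400000000 ))     # 262 nsec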
00:07:03.140 19:58:55 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:03.140 19:58:55 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:03.140 19:58:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.140 19:58:55 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:03.140 19:58:55 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.140 19:58:55 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.140 19:58:55 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.140 19:58:55 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.140 19:58:55 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.140 19:58:55 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:03.140 19:58:55 accel -- accel/accel.sh@41 -- # jq -r . 00:07:03.140 [2024-05-15 19:58:55.547796] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:07:03.140 [2024-05-15 19:58:55.547862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4012106 ] 00:07:03.140 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.140 [2024-05-15 19:58:55.634031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.401 [2024-05-15 19:58:55.705371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.971 19:58:56 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:03.971 19:58:56 accel -- common/autotest_common.sh@860 -- # return 0 00:07:03.972 19:58:56 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:03.972 19:58:56 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:03.972 19:58:56 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:03.972 19:58:56 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:03.972 19:58:56 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:03.972 19:58:56 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:03.972 19:58:56 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.972 19:58:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.972 19:58:56 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:03.972 19:58:56 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.972 19:58:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:03.972 19:58:56 accel -- accel/accel.sh@72 -- # IFS== 00:07:03.972 19:58:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:03.972 19:58:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:03.972 19:58:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:03.972 19:58:56 accel -- accel/accel.sh@72 -- # IFS== 00:07:03.972 19:58:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:03.972 19:58:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:03.972 19:58:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:03.972 19:58:56 accel -- accel/accel.sh@72 -- # IFS== 00:07:03.972 19:58:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:03.972 19:58:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:03.972 19:58:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:03.972 19:58:56 accel -- accel/accel.sh@72 -- # IFS== 00:07:03.972 19:58:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:03.972 19:58:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:03.972 19:58:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:03.972 19:58:56 accel -- accel/accel.sh@72 -- # IFS== 00:07:03.972 19:58:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:03.972 19:58:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:03.972 19:58:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:03.972 19:58:56 accel -- accel/accel.sh@72 -- # IFS== 00:07:03.972 19:58:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:03.972 19:58:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:03.972 19:58:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:03.972 19:58:56 accel -- accel/accel.sh@72 -- # IFS== 00:07:03.972 19:58:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:03.972 19:58:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:03.972 19:58:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:03.972 19:58:56 accel -- accel/accel.sh@72 -- # IFS== 00:07:03.972 19:58:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:03.972 19:58:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:03.972 19:58:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:03.972 19:58:56 accel -- accel/accel.sh@72 -- # IFS== 00:07:03.972 19:58:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:03.972 19:58:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:03.972 19:58:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:03.972 19:58:56 accel -- accel/accel.sh@72 -- # IFS== 00:07:03.972 19:58:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:03.972 19:58:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:03.972 19:58:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:03.972 19:58:56 accel -- accel/accel.sh@72 -- # IFS== 00:07:03.972 19:58:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:03.972 19:58:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:03.972 19:58:56 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:07:03.972 19:58:56 accel -- accel/accel.sh@72 -- # IFS== 00:07:03.972 19:58:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:03.972 19:58:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:03.972 19:58:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:03.972 19:58:56 accel -- accel/accel.sh@72 -- # IFS== 00:07:03.972 19:58:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:03.972 19:58:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:03.972 19:58:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:03.972 19:58:56 accel -- accel/accel.sh@72 -- # IFS== 00:07:03.972 19:58:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:03.972 19:58:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:03.972 19:58:56 accel -- accel/accel.sh@75 -- # killprocess 4012106 00:07:03.972 19:58:56 accel -- common/autotest_common.sh@946 -- # '[' -z 4012106 ']' 00:07:03.972 19:58:56 accel -- common/autotest_common.sh@950 -- # kill -0 4012106 00:07:03.972 19:58:56 accel -- common/autotest_common.sh@951 -- # uname 00:07:03.972 19:58:56 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:03.972 19:58:56 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4012106 00:07:04.232 19:58:56 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:04.232 19:58:56 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:04.232 19:58:56 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4012106' 00:07:04.232 killing process with pid 4012106 00:07:04.232 19:58:56 accel -- common/autotest_common.sh@965 -- # kill 4012106 00:07:04.232 19:58:56 accel -- common/autotest_common.sh@970 -- # wait 4012106 00:07:04.232 19:58:56 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:04.232 19:58:56 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:04.232 19:58:56 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:04.232 19:58:56 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:04.232 19:58:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.493 19:58:56 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:07:04.493 19:58:56 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:04.493 19:58:56 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:04.494 19:58:56 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.494 19:58:56 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.494 19:58:56 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.494 19:58:56 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.494 19:58:56 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.494 19:58:56 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:04.494 19:58:56 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:07:04.494 19:58:56 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:04.494 19:58:56 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:04.494 19:58:56 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:04.494 19:58:56 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:04.494 19:58:56 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:04.494 19:58:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.494 ************************************ 00:07:04.494 START TEST accel_missing_filename 00:07:04.494 ************************************ 00:07:04.494 19:58:56 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:07:04.494 19:58:56 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:04.494 19:58:56 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:04.494 19:58:56 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:04.494 19:58:56 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.494 19:58:56 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:04.494 19:58:56 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.494 19:58:56 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:04.494 19:58:56 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:04.494 19:58:56 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:04.494 19:58:56 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.494 19:58:56 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.494 19:58:56 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.494 19:58:56 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.494 19:58:56 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.494 19:58:56 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:04.494 19:58:56 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:04.494 [2024-05-15 19:58:56.903448] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:07:04.494 [2024-05-15 19:58:56.903549] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4012367 ] 00:07:04.494 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.494 [2024-05-15 19:58:56.992059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.754 [2024-05-15 19:58:57.072149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.754 [2024-05-15 19:58:57.104882] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:04.754 [2024-05-15 19:58:57.142741] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:07:04.754 A filename is required. 
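The "A filename is required." error above is the point of accel_missing_filename: a compress workload run without -l has no input file to read. The accel_compress_verify test that follows adds -l and -y and is likewise expected to fail, since compression does not support the verify option. A sketch of a plain compress run that should be accepted, reusing the binary and bib paths already shown in the log (the -c /dev/fd/62 JSON config is omitted here):

  # One-second software compress run; -l names the uncompressed input file
  # that the failing invocation above left out.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w compress \
      -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib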
00:07:04.754 19:58:57 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:04.754 19:58:57 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:04.754 19:58:57 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:04.754 19:58:57 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:04.754 19:58:57 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:04.754 19:58:57 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:04.754 00:07:04.754 real 0m0.325s 00:07:04.754 user 0m0.230s 00:07:04.754 sys 0m0.137s 00:07:04.754 19:58:57 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:04.754 19:58:57 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:04.754 ************************************ 00:07:04.754 END TEST accel_missing_filename 00:07:04.754 ************************************ 00:07:04.754 19:58:57 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:04.754 19:58:57 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:07:04.754 19:58:57 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:04.754 19:58:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.014 ************************************ 00:07:05.014 START TEST accel_compress_verify 00:07:05.014 ************************************ 00:07:05.014 19:58:57 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:05.014 19:58:57 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:05.014 19:58:57 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:05.014 19:58:57 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:05.014 19:58:57 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.014 19:58:57 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:05.014 19:58:57 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.014 19:58:57 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:05.014 19:58:57 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:05.014 19:58:57 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:05.014 19:58:57 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.014 19:58:57 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.014 19:58:57 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.014 19:58:57 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.014 19:58:57 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.014 
19:58:57 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:05.014 19:58:57 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:05.014 [2024-05-15 19:58:57.303299] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:07:05.014 [2024-05-15 19:58:57.303398] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4012510 ] 00:07:05.014 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.014 [2024-05-15 19:58:57.391555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.014 [2024-05-15 19:58:57.470151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.014 [2024-05-15 19:58:57.502778] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:05.273 [2024-05-15 19:58:57.540429] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:07:05.273 00:07:05.273 Compression does not support the verify option, aborting. 00:07:05.273 19:58:57 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:05.273 19:58:57 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:05.273 19:58:57 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:05.274 19:58:57 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:05.274 19:58:57 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:05.274 19:58:57 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:05.274 00:07:05.274 real 0m0.321s 00:07:05.274 user 0m0.239s 00:07:05.274 sys 0m0.125s 00:07:05.274 19:58:57 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.274 19:58:57 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:05.274 ************************************ 00:07:05.274 END TEST accel_compress_verify 00:07:05.274 ************************************ 00:07:05.274 19:58:57 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:05.274 19:58:57 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:05.274 19:58:57 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.274 19:58:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.274 ************************************ 00:07:05.274 START TEST accel_wrong_workload 00:07:05.274 ************************************ 00:07:05.274 19:58:57 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:07:05.274 19:58:57 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:05.274 19:58:57 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:05.274 19:58:57 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:05.274 19:58:57 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.274 19:58:57 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:05.274 19:58:57 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.274 19:58:57 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:07:05.274 19:58:57 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:05.274 19:58:57 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:05.274 19:58:57 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.274 19:58:57 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.274 19:58:57 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.274 19:58:57 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.274 19:58:57 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.274 19:58:57 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:05.274 19:58:57 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:05.274 Unsupported workload type: foobar 00:07:05.274 [2024-05-15 19:58:57.698284] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:05.274 accel_perf options: 00:07:05.274 [-h help message] 00:07:05.274 [-q queue depth per core] 00:07:05.274 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:05.274 [-T number of threads per core 00:07:05.274 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:05.274 [-t time in seconds] 00:07:05.274 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:05.274 [ dif_verify, , dif_generate, dif_generate_copy 00:07:05.274 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:05.274 [-l for compress/decompress workloads, name of uncompressed input file 00:07:05.274 [-S for crc32c workload, use this seed value (default 0) 00:07:05.274 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:05.274 [-f for fill workload, use this BYTE value (default 255) 00:07:05.274 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:05.274 [-y verify result if this switch is on] 00:07:05.274 [-a tasks to allocate per core (default: same value as -q)] 00:07:05.274 Can be used to spread operations across a wider range of memory. 
00:07:05.274 19:58:57 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:05.274 19:58:57 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:05.274 19:58:57 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:05.274 19:58:57 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:05.274 00:07:05.274 real 0m0.035s 00:07:05.274 user 0m0.021s 00:07:05.274 sys 0m0.014s 00:07:05.274 19:58:57 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.274 19:58:57 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:05.274 ************************************ 00:07:05.274 END TEST accel_wrong_workload 00:07:05.274 ************************************ 00:07:05.274 Error: writing output failed: Broken pipe 00:07:05.274 19:58:57 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:05.274 19:58:57 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:07:05.274 19:58:57 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.274 19:58:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.535 ************************************ 00:07:05.535 START TEST accel_negative_buffers 00:07:05.535 ************************************ 00:07:05.535 19:58:57 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:05.535 19:58:57 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:05.535 19:58:57 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:05.535 19:58:57 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:05.535 19:58:57 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.535 19:58:57 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:05.535 19:58:57 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.535 19:58:57 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:05.535 19:58:57 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:05.535 19:58:57 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:05.535 19:58:57 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.535 19:58:57 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.535 19:58:57 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.535 19:58:57 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.535 19:58:57 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.535 19:58:57 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:05.535 19:58:57 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:05.535 -x option must be non-negative. 
00:07:05.535 [2024-05-15 19:58:57.813970] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:05.535 accel_perf options: 00:07:05.535 [-h help message] 00:07:05.535 [-q queue depth per core] 00:07:05.535 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:05.535 [-T number of threads per core 00:07:05.535 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:05.535 [-t time in seconds] 00:07:05.535 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:05.535 [ dif_verify, , dif_generate, dif_generate_copy 00:07:05.535 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:05.535 [-l for compress/decompress workloads, name of uncompressed input file 00:07:05.535 [-S for crc32c workload, use this seed value (default 0) 00:07:05.535 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:05.535 [-f for fill workload, use this BYTE value (default 255) 00:07:05.535 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:05.535 [-y verify result if this switch is on] 00:07:05.535 [-a tasks to allocate per core (default: same value as -q)] 00:07:05.535 Can be used to spread operations across a wider range of memory. 00:07:05.535 19:58:57 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:05.535 19:58:57 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:05.535 19:58:57 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:05.535 19:58:57 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:05.535 00:07:05.535 real 0m0.034s 00:07:05.535 user 0m0.018s 00:07:05.535 sys 0m0.015s 00:07:05.535 19:58:57 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.535 19:58:57 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:05.535 ************************************ 00:07:05.535 END TEST accel_negative_buffers 00:07:05.535 ************************************ 00:07:05.535 Error: writing output failed: Broken pipe 00:07:05.535 19:58:57 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:05.535 19:58:57 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:05.535 19:58:57 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.535 19:58:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.535 ************************************ 00:07:05.535 START TEST accel_crc32c 00:07:05.535 ************************************ 00:07:05.535 19:58:57 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:05.535 19:58:57 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:05.535 19:58:57 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:05.535 19:58:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.535 19:58:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.535 19:58:57 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:05.535 19:58:57 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 
-y 00:07:05.535 19:58:57 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:05.535 19:58:57 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.535 19:58:57 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.535 19:58:57 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.535 19:58:57 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.535 19:58:57 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.535 19:58:57 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:05.535 19:58:57 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:05.535 [2024-05-15 19:58:57.906299] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:07:05.536 [2024-05-15 19:58:57.906364] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4012570 ] 00:07:05.536 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.536 [2024-05-15 19:58:57.992875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.797 [2024-05-15 19:58:58.065970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.797 19:58:58 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.797 19:58:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.798 19:58:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.798 19:58:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.798 19:58:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.798 19:58:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.798 19:58:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.798 19:58:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.798 19:58:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.739 19:58:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.739 19:58:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.739 19:58:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.739 19:58:59 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:07:06.739 19:58:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.739 19:58:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.739 19:58:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.739 19:58:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.739 19:58:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.739 19:58:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.739 19:58:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.739 19:58:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.739 19:58:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.739 19:58:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.739 19:58:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.739 19:58:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.739 19:58:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.739 19:58:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.739 19:58:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.739 19:58:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.739 19:58:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.739 19:58:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.739 19:58:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.739 19:58:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.739 19:58:59 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.739 19:58:59 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:06.739 19:58:59 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.739 00:07:06.739 real 0m1.317s 00:07:06.739 user 0m1.212s 00:07:06.739 sys 0m0.116s 00:07:06.739 19:58:59 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.739 19:58:59 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:06.739 ************************************ 00:07:06.739 END TEST accel_crc32c 00:07:06.739 ************************************ 00:07:06.739 19:58:59 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:06.739 19:58:59 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:06.739 19:58:59 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.739 19:58:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.000 ************************************ 00:07:07.000 START TEST accel_crc32c_C2 00:07:07.000 ************************************ 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- 
accel/accel.sh@12 -- # build_accel_config 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:07.000 [2024-05-15 19:58:59.300409] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:07:07.000 [2024-05-15 19:58:59.300516] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4012925 ] 00:07:07.000 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.000 [2024-05-15 19:58:59.393106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.000 [2024-05-15 19:58:59.459748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.000 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:07.261 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.261 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.261 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.261 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:07.261 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.261 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.261 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.261 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.261 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.261 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.261 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.261 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:07.261 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.261 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.261 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.261 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.261 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.261 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.261 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.261 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.261 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.261 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.261 19:58:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.202 19:59:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.202 19:59:00 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.202 19:59:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.202 19:59:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.202 19:59:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.202 19:59:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.202 19:59:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.202 19:59:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.202 19:59:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.202 19:59:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.202 19:59:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.202 19:59:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.202 19:59:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.202 19:59:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.202 19:59:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.202 19:59:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.202 19:59:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.202 19:59:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.202 19:59:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.202 19:59:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.202 19:59:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.202 19:59:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.202 19:59:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.202 19:59:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.202 19:59:00 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.202 19:59:00 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:08.202 19:59:00 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.202 00:07:08.202 real 0m1.317s 00:07:08.202 user 0m1.197s 00:07:08.202 sys 0m0.131s 00:07:08.202 19:59:00 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:08.202 19:59:00 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:08.202 ************************************ 00:07:08.202 END TEST accel_crc32c_C2 00:07:08.202 ************************************ 00:07:08.202 19:59:00 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:08.202 19:59:00 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:08.202 19:59:00 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:08.202 19:59:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.202 ************************************ 00:07:08.202 START TEST accel_copy 00:07:08.202 ************************************ 00:07:08.202 19:59:00 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:07:08.202 19:59:00 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:08.202 19:59:00 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:08.202 19:59:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.202 19:59:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.202 19:59:00 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:08.202 19:59:00 
accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:08.202 19:59:00 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.202 19:59:00 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.202 19:59:00 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:08.202 19:59:00 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.202 19:59:00 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.202 19:59:00 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.202 19:59:00 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:08.202 19:59:00 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:08.202 [2024-05-15 19:59:00.690222] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:07:08.202 [2024-05-15 19:59:00.690321] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4013275 ] 00:07:08.463 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.463 [2024-05-15 19:59:00.777149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.463 [2024-05-15 19:59:00.846768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.463 19:59:00 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.463 19:59:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.849 19:59:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:09.849 19:59:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.849 19:59:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.849 19:59:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.849 19:59:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:09.849 19:59:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.849 19:59:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.849 19:59:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
00:07:09.849 19:59:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:09.849 19:59:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.849 19:59:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.849 19:59:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.849 19:59:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:09.849 19:59:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.849 19:59:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.849 19:59:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.849 19:59:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:09.849 19:59:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.849 19:59:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.849 19:59:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.849 19:59:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:09.849 19:59:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.849 19:59:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.849 19:59:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.849 19:59:01 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.849 19:59:01 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:09.849 19:59:01 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.849 00:07:09.849 real 0m1.314s 00:07:09.849 user 0m1.202s 00:07:09.849 sys 0m0.122s 00:07:09.849 19:59:01 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:09.849 19:59:01 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:09.849 ************************************ 00:07:09.849 END TEST accel_copy 00:07:09.849 ************************************ 00:07:09.849 19:59:02 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:09.849 19:59:02 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:09.849 19:59:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:09.849 19:59:02 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.849 ************************************ 00:07:09.849 START TEST accel_fill 00:07:09.849 ************************************ 00:07:09.849 19:59:02 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:09.849 19:59:02 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:09.849 19:59:02 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:09.849 19:59:02 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:09.849 19:59:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.849 19:59:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.850 19:59:02 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:09.850 [2024-05-15 19:59:02.065245] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:07:09.850 [2024-05-15 19:59:02.065291] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4013546 ] 00:07:09.850 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.850 [2024-05-15 19:59:02.148687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.850 [2024-05-15 19:59:02.213935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.850 19:59:02 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:09.850 19:59:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:11.236 19:59:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:11.236 19:59:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:11.236 19:59:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:11.236 19:59:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:11.236 19:59:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:11.236 19:59:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:11.236 19:59:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:11.236 19:59:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:11.236 19:59:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:11.236 19:59:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:11.236 19:59:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:11.236 19:59:03 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:07:11.236 19:59:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:11.236 19:59:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:11.236 19:59:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:11.236 19:59:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:11.236 19:59:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:11.236 19:59:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:11.236 19:59:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:11.236 19:59:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:11.236 19:59:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:11.236 19:59:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:11.236 19:59:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:11.236 19:59:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:11.236 19:59:03 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.236 19:59:03 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:11.236 19:59:03 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.236 00:07:11.236 real 0m1.292s 00:07:11.236 user 0m1.186s 00:07:11.236 sys 0m0.117s 00:07:11.236 19:59:03 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:11.236 19:59:03 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:11.236 ************************************ 00:07:11.236 END TEST accel_fill 00:07:11.236 ************************************ 00:07:11.236 19:59:03 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:11.236 19:59:03 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:11.236 19:59:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:11.236 19:59:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.236 ************************************ 00:07:11.236 START TEST accel_copy_crc32c 00:07:11.236 ************************************ 00:07:11.236 19:59:03 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:07:11.236 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:11.236 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:11.236 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.236 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.236 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:11.236 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:11.236 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:11.236 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.236 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.236 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.236 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.236 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.236 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:11.236 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
00:07:11.236 [2024-05-15 19:59:03.428482] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:07:11.236 [2024-05-15 19:59:03.428543] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4013733 ] 00:07:11.236 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.236 [2024-05-15 19:59:03.513688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.236 [2024-05-15 19:59:03.582028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.237 19:59:03 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:11.237 19:59:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.624 19:59:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.624 19:59:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.624 19:59:04 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:07:12.624 19:59:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.624 19:59:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.624 19:59:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.624 19:59:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.624 19:59:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.624 19:59:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.624 19:59:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.624 19:59:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.624 19:59:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.624 19:59:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.624 19:59:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.624 19:59:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.624 19:59:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.624 19:59:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.624 19:59:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.624 19:59:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.624 19:59:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.624 19:59:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.624 19:59:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.624 19:59:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.624 19:59:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.624 19:59:04 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.624 19:59:04 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:12.624 19:59:04 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.624 00:07:12.624 real 0m1.310s 00:07:12.624 user 0m1.195s 00:07:12.624 sys 0m0.127s 00:07:12.624 19:59:04 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:12.624 19:59:04 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:12.624 ************************************ 00:07:12.624 END TEST accel_copy_crc32c 00:07:12.624 ************************************ 00:07:12.624 19:59:04 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:12.624 19:59:04 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:12.624 19:59:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:12.624 19:59:04 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.624 ************************************ 00:07:12.624 START TEST accel_copy_crc32c_C2 00:07:12.624 ************************************ 00:07:12.624 19:59:04 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:12.624 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:12.624 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:12.624 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:12.624 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.624 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:07:12.624 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:12.624 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.624 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.624 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.624 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.624 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.624 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.624 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:12.624 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:12.624 [2024-05-15 19:59:04.782498] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:07:12.624 [2024-05-15 19:59:04.782541] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4014015 ] 00:07:12.625 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.625 [2024-05-15 19:59:04.863432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.625 [2024-05-15 19:59:04.927982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # 
accel_opc=copy_crc32c 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:12.625 19:59:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.567 19:59:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.567 19:59:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.567 19:59:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.567 19:59:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.567 19:59:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.567 19:59:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.567 19:59:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.567 19:59:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.567 19:59:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.567 19:59:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.567 19:59:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.567 19:59:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.567 19:59:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.567 19:59:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.567 19:59:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.567 19:59:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.567 19:59:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.567 19:59:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.567 19:59:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.567 19:59:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.567 19:59:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.567 19:59:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.567 19:59:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.567 19:59:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.567 19:59:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.567 19:59:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:13.567 19:59:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.567 00:07:13.567 real 0m1.288s 00:07:13.567 user 0m1.186s 00:07:13.567 sys 0m0.113s 00:07:13.567 19:59:06 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:13.567 19:59:06 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:13.567 
************************************ 00:07:13.567 END TEST accel_copy_crc32c_C2 00:07:13.567 ************************************ 00:07:13.829 19:59:06 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:13.829 19:59:06 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:13.829 19:59:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:13.829 19:59:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.829 ************************************ 00:07:13.829 START TEST accel_dualcast 00:07:13.829 ************************************ 00:07:13.829 19:59:06 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:07:13.829 19:59:06 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:13.829 19:59:06 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:13.829 19:59:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:13.829 19:59:06 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:13.829 19:59:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:13.829 19:59:06 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:13.829 19:59:06 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:13.829 19:59:06 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.829 19:59:06 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.829 19:59:06 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.829 19:59:06 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.829 19:59:06 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.829 19:59:06 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:13.829 19:59:06 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:13.829 [2024-05-15 19:59:06.161621] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:07:13.829 [2024-05-15 19:59:06.161710] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4014364 ] 00:07:13.829 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.829 [2024-05-15 19:59:06.244968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.829 [2024-05-15 19:59:06.313755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.090 19:59:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:14.090 19:59:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.090 19:59:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.090 19:59:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.090 19:59:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:14.090 19:59:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.090 19:59:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.090 19:59:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.090 19:59:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:14.090 19:59:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.091 
19:59:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:14.091 19:59:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:15.034 19:59:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:15.034 19:59:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:15.034 19:59:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:15.034 19:59:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:15.034 19:59:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:15.034 19:59:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:15.034 19:59:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:15.034 19:59:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:15.034 19:59:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:15.034 19:59:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:15.034 19:59:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:15.034 19:59:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:15.034 19:59:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:15.034 19:59:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:15.034 19:59:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:15.034 19:59:07 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:07:15.034 19:59:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:15.034 19:59:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:15.034 19:59:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:15.034 19:59:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:15.034 19:59:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:15.034 19:59:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:15.034 19:59:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:15.034 19:59:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:15.034 19:59:07 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:15.034 19:59:07 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:15.035 19:59:07 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.035 00:07:15.035 real 0m1.307s 00:07:15.035 user 0m1.198s 00:07:15.035 sys 0m0.120s 00:07:15.035 19:59:07 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:15.035 19:59:07 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:15.035 ************************************ 00:07:15.035 END TEST accel_dualcast 00:07:15.035 ************************************ 00:07:15.035 19:59:07 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:15.035 19:59:07 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:15.035 19:59:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:15.035 19:59:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.035 ************************************ 00:07:15.035 START TEST accel_compare 00:07:15.035 ************************************ 00:07:15.035 19:59:07 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:07:15.035 19:59:07 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:15.035 19:59:07 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:15.035 19:59:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:15.035 19:59:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:15.035 19:59:07 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:15.035 19:59:07 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:15.035 19:59:07 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:15.035 19:59:07 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.035 19:59:07 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.035 19:59:07 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.035 19:59:07 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.035 19:59:07 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.035 19:59:07 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:15.035 19:59:07 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:15.297 [2024-05-15 19:59:07.553047] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:07:15.297 [2024-05-15 19:59:07.553109] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4014717 ] 00:07:15.297 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.297 [2024-05-15 19:59:07.638970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.297 [2024-05-15 19:59:07.704550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:15.297 19:59:07 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:15.297 19:59:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.684 19:59:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:16.684 19:59:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.684 19:59:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.684 19:59:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.684 19:59:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:16.684 19:59:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.684 19:59:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.684 19:59:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.684 19:59:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:16.684 19:59:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.684 19:59:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.684 19:59:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.684 19:59:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:16.684 19:59:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.684 19:59:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.684 19:59:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.684 19:59:08 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:07:16.684 19:59:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.684 19:59:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.684 19:59:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.684 19:59:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:16.684 19:59:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.684 19:59:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.684 19:59:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.684 19:59:08 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.684 19:59:08 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:16.684 19:59:08 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.684 00:07:16.684 real 0m1.309s 00:07:16.684 user 0m1.198s 00:07:16.684 sys 0m0.121s 00:07:16.684 19:59:08 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:16.684 19:59:08 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:16.684 ************************************ 00:07:16.684 END TEST accel_compare 00:07:16.684 ************************************ 00:07:16.684 19:59:08 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:16.684 19:59:08 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:16.684 19:59:08 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:16.684 19:59:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.684 ************************************ 00:07:16.684 START TEST accel_xor 00:07:16.684 ************************************ 00:07:16.684 19:59:08 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:07:16.684 19:59:08 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:16.684 19:59:08 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:16.684 19:59:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.684 19:59:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.684 19:59:08 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:16.684 19:59:08 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:16.684 19:59:08 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:16.684 19:59:08 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.684 19:59:08 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.684 19:59:08 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.684 19:59:08 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.684 19:59:08 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.684 19:59:08 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:16.684 19:59:08 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:16.684 [2024-05-15 19:59:08.940012] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
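The '-c /dev/fd/62' argument on every accel_perf command line here suggests a JSON config handed over through bash process substitution rather than a file on disk. The sketch below shows that pattern in isolation; gen_accel_cfg and the empty config body are illustrative assumptions, not code taken from accel.sh.

# Sketch of feeding accel_perf a generated JSON config through a file
# descriptor, the pattern implied by '-c /dev/fd/62' in the traces above.
gen_accel_cfg() {
    # Empty SPDK config: no extra accel modules configured, so the software
    # path is what ends up servicing the workload (illustrative content).
    printf '{"subsystems": []}\n'
}
"$SPDK_DIR/build/examples/accel_perf" -c <(gen_accel_cfg) -t 1 -w xor -y
# bash exposes the substituted stream as /dev/fd/NN; NN happened to be 62 here.
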
00:07:16.684 [2024-05-15 19:59:08.940099] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4015029 ] 00:07:16.684 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.684 [2024-05-15 19:59:09.018667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.684 [2024-05-15 19:59:09.093208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:16.684 19:59:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.071 
19:59:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.071 00:07:18.071 real 0m1.312s 00:07:18.071 user 0m1.198s 00:07:18.071 sys 0m0.125s 00:07:18.071 19:59:10 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:18.071 19:59:10 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:18.071 ************************************ 00:07:18.071 END TEST accel_xor 00:07:18.071 ************************************ 00:07:18.071 19:59:10 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:18.071 19:59:10 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:18.071 19:59:10 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:18.071 19:59:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.071 ************************************ 00:07:18.071 START TEST accel_xor 00:07:18.071 ************************************ 00:07:18.071 19:59:10 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:18.071 [2024-05-15 19:59:10.327646] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
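Most of the trace volume in these blocks is one small loop in accel.sh: accel_perf's configuration dump is consumed line by line as 'key: value' pairs, which is why 'IFS=:', 'read -r var val' and 'case "$var" in' repeat for every field. Below is a simplified sketch of that pattern; the key names matched and the two variables kept are assumptions, and the real script tracks more state than this.

# Simplified sketch of the key:value parsing behind the repeated
# 'IFS=:' / 'read -r var val' / 'case "$var" in' trace entries above.
accel_opc='' accel_module=''
while IFS=: read -r var val; do
    case "$var" in
        *workload*) accel_opc=$(tr -d ' ' <<< "$val") ;;    # e.g. xor
        *module*)   accel_module=$(tr -d ' ' <<< "$val") ;; # e.g. software
    esac
done < <("$SPDK_DIR/build/examples/accel_perf" -t 1 -w xor -y -x 3 2>&1)
echo "opcode=$accel_opc module=$accel_module"
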
00:07:18.071 [2024-05-15 19:59:10.327708] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4015261 ] 00:07:18.071 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.071 [2024-05-15 19:59:10.397521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.071 [2024-05-15 19:59:10.464390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.071 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.072 19:59:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.458 19:59:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.458 19:59:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.458 19:59:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.458 19:59:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.458 19:59:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.458 19:59:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.458 19:59:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.458 19:59:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.458 19:59:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.458 19:59:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.458 19:59:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.458 19:59:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.458 19:59:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.458 19:59:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.458 19:59:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.458 19:59:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.458 19:59:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.458 
19:59:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.458 19:59:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.458 19:59:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.458 19:59:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.458 19:59:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.458 19:59:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.458 19:59:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.458 19:59:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.458 19:59:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:19.458 19:59:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.458 00:07:19.458 real 0m1.293s 00:07:19.458 user 0m1.192s 00:07:19.458 sys 0m0.112s 00:07:19.458 19:59:11 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:19.459 19:59:11 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:19.459 ************************************ 00:07:19.459 END TEST accel_xor 00:07:19.459 ************************************ 00:07:19.459 19:59:11 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:19.459 19:59:11 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:19.459 19:59:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:19.459 19:59:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.459 ************************************ 00:07:19.459 START TEST accel_dif_verify 00:07:19.459 ************************************ 00:07:19.459 19:59:11 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:19.459 [2024-05-15 19:59:11.699875] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
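The START TEST / END TEST banners and the real/user/sys timings wrapped around each block come from the run_test helper in autotest_common.sh. Roughly, it behaves like the hypothetical simplification below (not the actual function body, which also toggles xtrace and error handling).

# Hypothetical simplification of the run_test wrapper that produces the
# banner/timing structure visible throughout this log.
run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"                     # emits the real/user/sys lines
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
}
run_test accel_dif_verify accel_test -t 1 -w dif_verify
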
00:07:19.459 [2024-05-15 19:59:11.699941] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4015476 ] 00:07:19.459 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.459 [2024-05-15 19:59:11.786768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.459 [2024-05-15 19:59:11.866642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.459 
19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:19.459 19:59:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.847 19:59:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:20.847 
19:59:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.847 19:59:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.847 19:59:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.847 19:59:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:20.847 19:59:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.847 19:59:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.847 19:59:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.847 19:59:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:20.847 19:59:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.847 19:59:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.847 19:59:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.847 19:59:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:20.847 19:59:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.847 19:59:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.847 19:59:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.847 19:59:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:20.847 19:59:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.847 19:59:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.847 19:59:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.847 19:59:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:20.847 19:59:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.847 19:59:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.847 19:59:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.847 19:59:12 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:20.847 19:59:12 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:20.847 19:59:12 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.847 00:07:20.847 real 0m1.326s 00:07:20.847 user 0m1.212s 00:07:20.847 sys 0m0.126s 00:07:20.847 19:59:12 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:20.847 19:59:12 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:20.847 ************************************ 00:07:20.847 END TEST accel_dif_verify 00:07:20.847 ************************************ 00:07:20.847 19:59:13 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:20.847 19:59:13 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:20.847 19:59:13 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:20.847 19:59:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.847 ************************************ 00:07:20.847 START TEST accel_dif_generate 00:07:20.847 ************************************ 00:07:20.847 19:59:13 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.847 
19:59:13 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:20.847 [2024-05-15 19:59:13.105953] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:07:20.847 [2024-05-15 19:59:13.106042] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4015819 ] 00:07:20.847 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.847 [2024-05-15 19:59:13.192093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.847 [2024-05-15 19:59:13.258496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.847 19:59:13 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
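Each TEST block in this log closes with the same three '[[ ... ]]' checks; stripped of the xtrace noise they assert that an opcode and a module name were actually parsed from the output and that the software module, not a hardware offload, serviced the run.

# The closing assertions of every accel test above, written out plainly.
[[ -n $accel_module ]]            # a module name was parsed from the output
[[ -n $accel_opc ]]               # an opcode (dif_generate, xor, ...) was parsed
[[ $accel_module == software ]]   # this CI host exercises the software accel path
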
00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:20.847 19:59:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:20.848 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:20.848 19:59:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.313 19:59:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:22.313 19:59:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:22.313 19:59:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.313 19:59:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.313 19:59:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:22.313 19:59:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:22.313 19:59:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.313 19:59:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.313 19:59:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:22.313 19:59:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:22.313 19:59:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.313 19:59:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.313 19:59:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:22.313 19:59:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:22.313 19:59:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.313 19:59:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.313 19:59:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:22.313 19:59:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:22.313 19:59:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.313 19:59:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.313 19:59:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:22.313 19:59:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:22.313 19:59:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.313 19:59:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.313 19:59:14 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.313 19:59:14 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:22.313 19:59:14 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.313 00:07:22.313 real 0m1.311s 00:07:22.313 user 0m1.205s 00:07:22.313 sys 
0m0.118s 00:07:22.313 19:59:14 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:22.313 19:59:14 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:22.313 ************************************ 00:07:22.313 END TEST accel_dif_generate 00:07:22.313 ************************************ 00:07:22.313 19:59:14 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:22.313 19:59:14 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:22.313 19:59:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:22.313 19:59:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.313 ************************************ 00:07:22.313 START TEST accel_dif_generate_copy 00:07:22.313 ************************************ 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:22.313 [2024-05-15 19:59:14.478200] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
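The recurring 'EAL: No free 2048 kB hugepages reported on node 1' notice only means NUMA node 1 had no free 2 MB hugepages when EAL initialized; the runs continue, so node 0 presumably covered the allocations. A quick, read-only way to check the per-node split on the host is sketched below (standard sysfs/procfs paths, assumed to be present on this machine).

# Per-node 2 MB hugepage counters plus the global summary (read-only checks).
cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages
grep -i '^Huge' /proc/meminfo
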
00:07:22.313 [2024-05-15 19:59:14.478262] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4016169 ] 00:07:22.313 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.313 [2024-05-15 19:59:14.562744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.313 [2024-05-15 19:59:14.630805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.313 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.314 19:59:14 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:22.314 19:59:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.276 19:59:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:23.276 19:59:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.276 19:59:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
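The long runs of "IFS=:", "read -r var val" and "case \"$var\" in" above are bash xtrace output from the helper in accel.sh that reads back the configuration accel_perf reports for this run (workload dif_generate_copy, 4096-byte blocks, queue depth 32, software module, 1-second run). A minimal sketch of that key:value parsing pattern, reconstructed from the sh@19/sh@21/sh@22/sh@23 markers rather than copied from accel.sh, is:

#!/usr/bin/env bash
# Illustrative reconstruction only: split "Key: value" lines the way the trace suggests.
# accel_opc and accel_module are the variable names visible at sh@22/sh@23; the case
# patterns below are placeholders, not the exact strings accel.sh matches on, and
# perf_output.txt stands in for wherever the harness actually captures accel_perf's stdout.
while IFS=: read -r var val; do
  case "$var" in
    *[Ww]orkload*) accel_opc=${val//[[:space:]]/} ;;    # e.g. dif_generate_copy
    *[Mm]odule*)   accel_module=${val//[[:space:]]/} ;; # e.g. software
  esac
done < perf_output.txt
# mirrors the sh@27 checks at the end of each test
[[ -n $accel_module && -n $accel_opc ]] && echo "parsed: $accel_opc on $accel_module"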
00:07:23.276 19:59:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.276 19:59:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:23.276 19:59:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.276 19:59:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.276 19:59:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.276 19:59:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:23.276 19:59:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.276 19:59:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.276 19:59:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.276 19:59:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:23.276 19:59:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.276 19:59:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.276 19:59:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.276 19:59:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:23.276 19:59:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.276 19:59:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.276 19:59:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.276 19:59:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:23.276 19:59:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.276 19:59:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.276 19:59:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.276 19:59:15 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.276 19:59:15 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:23.276 19:59:15 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.276 00:07:23.276 real 0m1.309s 00:07:23.276 user 0m1.202s 00:07:23.276 sys 0m0.118s 00:07:23.276 19:59:15 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:23.276 19:59:15 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:23.276 ************************************ 00:07:23.276 END TEST accel_dif_generate_copy 00:07:23.276 ************************************ 00:07:23.537 19:59:15 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:23.537 19:59:15 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:23.537 19:59:15 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:23.537 19:59:15 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:23.537 19:59:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.537 ************************************ 00:07:23.537 START TEST accel_comp 00:07:23.537 ************************************ 00:07:23.537 19:59:15 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:23.537 19:59:15 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:23.537 19:59:15 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:07:23.537 19:59:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.537 19:59:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.537 19:59:15 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:23.537 19:59:15 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:23.537 19:59:15 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:23.537 19:59:15 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.537 19:59:15 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.537 19:59:15 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.537 19:59:15 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.537 19:59:15 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.537 19:59:15 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:23.537 19:59:15 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:23.537 [2024-05-15 19:59:15.859871] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:07:23.537 [2024-05-15 19:59:15.859932] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4016529 ] 00:07:23.537 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.537 [2024-05-15 19:59:15.925500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.537 [2024-05-15 19:59:15.989656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.537 19:59:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.537 19:59:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.537 19:59:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.537 19:59:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.537 19:59:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.537 19:59:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.537 19:59:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.537 19:59:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.537 19:59:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.537 19:59:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.537 19:59:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.537 19:59:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.537 19:59:16 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:23.537 19:59:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.537 19:59:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.537 19:59:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.537 19:59:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.537 19:59:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.537 19:59:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.537 19:59:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.537 19:59:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.537 19:59:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.537 
19:59:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.537 19:59:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.537 19:59:16 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:23.537 19:59:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.537 19:59:16 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:23.537 19:59:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.537 19:59:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.537 19:59:16 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.537 19:59:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.538 19:59:16 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.538 19:59:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.926 19:59:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:24.926 19:59:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.926 19:59:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.926 19:59:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.926 19:59:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:24.926 19:59:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.926 19:59:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.926 19:59:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.926 19:59:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:24.926 19:59:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.926 19:59:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.926 19:59:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.926 19:59:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:24.926 19:59:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.926 19:59:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.926 19:59:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.926 19:59:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:24.926 19:59:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.926 19:59:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.926 19:59:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.926 19:59:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:24.926 19:59:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.926 19:59:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:24.926 19:59:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:24.926 19:59:17 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:24.926 19:59:17 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:24.926 19:59:17 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.926 00:07:24.926 real 0m1.289s 00:07:24.926 user 0m1.199s 00:07:24.926 sys 0m0.101s 00:07:24.926 19:59:17 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:24.926 19:59:17 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:24.926 ************************************ 00:07:24.926 END TEST accel_comp 00:07:24.926 ************************************ 00:07:24.926 19:59:17 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:24.926 19:59:17 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:24.926 19:59:17 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:24.926 19:59:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.926 ************************************ 00:07:24.926 START TEST accel_decomp 00:07:24.926 ************************************ 00:07:24.926 19:59:17 
accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:24.926 19:59:17 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:24.926 19:59:17 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:24.926 19:59:17 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:24.926 19:59:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.926 19:59:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.926 19:59:17 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:24.926 19:59:17 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:24.927 [2024-05-15 19:59:17.207259] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:07:24.927 [2024-05-15 19:59:17.207308] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4016755 ] 00:07:24.927 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.927 [2024-05-15 19:59:17.289779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.927 [2024-05-15 19:59:17.354659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.927 19:59:17 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.927 19:59:17 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.927 19:59:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.314 19:59:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.314 19:59:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.314 19:59:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.314 19:59:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.314 19:59:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.314 19:59:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.314 19:59:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.314 19:59:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.314 19:59:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.314 19:59:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.314 19:59:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.314 19:59:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.314 19:59:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.314 19:59:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.314 19:59:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.314 19:59:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.314 19:59:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.314 19:59:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.314 19:59:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.314 19:59:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.314 19:59:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.314 19:59:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.314 19:59:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.314 19:59:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.314 19:59:18 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.314 19:59:18 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:26.314 19:59:18 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.314 00:07:26.314 real 0m1.293s 00:07:26.314 user 0m1.196s 00:07:26.314 sys 0m0.109s 00:07:26.314 19:59:18 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:26.314 19:59:18 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:26.314 ************************************ 00:07:26.314 END TEST accel_decomp 00:07:26.314 ************************************ 00:07:26.314 
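Each of these tests ultimately shells out to the accel_perf example binary with the command line recorded at the sh@12 marker: -w compress for accel_comp, -w decompress -y for accel_decomp, and -y -o 0 for the accel_decmop_full run that follows. Reproduced outside the harness the invocations look roughly like the sketch below; -c /dev/fd/62 appears to be the JSON config the wrapper feeds in over a process substitution, so a standalone run would point -c at a real file or drop it (accel_perf -h should list the remaining options):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# 1-second software compress of the bib test file (accel_comp)
$SPDK/build/examples/accel_perf -t 1 -w compress -l $SPDK/test/accel/bib
# same file decompressed; accel_decomp adds -y, accel_decmop_full adds -y -o 0
$SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y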
19:59:18 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:26.314 19:59:18 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:26.314 19:59:18 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:26.314 19:59:18 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.314 ************************************ 00:07:26.314 START TEST accel_decmop_full 00:07:26.314 ************************************ 00:07:26.314 19:59:18 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:07:26.314 [2024-05-15 19:59:18.586965] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:07:26.314 [2024-05-15 19:59:18.587055] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4016937 ] 00:07:26.314 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.314 [2024-05-15 19:59:18.674992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.314 [2024-05-15 19:59:18.744086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
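The "EAL: No free 2048 kB hugepages reported on node 1" notice appears before every accel_perf run in this job; it only indicates that NUMA node 1 has no 2 MB hugepages reserved, and the single-core runs proceed on node 0. A quick way to check and, if needed, top up hugepages on a similar box is sketched below (HUGEMEM is the knob SPDK's scripts/setup.sh normally honours; treat the exact variable name as an assumption if your SPDK version differs):

# per-NUMA-node 2 MB hugepage counts
grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
# reserve ~2 GB of hugepages through SPDK's helper script
sudo HUGEMEM=2048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh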
00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.314 19:59:18 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.702 19:59:19 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:27.702 19:59:19 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.702 19:59:19 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.702 19:59:19 accel.accel_decmop_full -- accel/accel.sh@19 -- 
# read -r var val 00:07:27.702 19:59:19 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:27.702 19:59:19 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.702 19:59:19 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.702 19:59:19 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.702 19:59:19 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:27.702 19:59:19 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.702 19:59:19 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.702 19:59:19 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.702 19:59:19 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:27.702 19:59:19 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.702 19:59:19 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.702 19:59:19 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.702 19:59:19 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:27.702 19:59:19 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.702 19:59:19 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.702 19:59:19 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.702 19:59:19 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:27.702 19:59:19 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.702 19:59:19 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.702 19:59:19 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.702 19:59:19 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:27.702 19:59:19 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:27.702 19:59:19 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.702 00:07:27.702 real 0m1.332s 00:07:27.702 user 0m1.217s 00:07:27.702 sys 0m0.127s 00:07:27.702 19:59:19 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:27.702 19:59:19 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:07:27.702 ************************************ 00:07:27.702 END TEST accel_decmop_full 00:07:27.703 ************************************ 00:07:27.703 19:59:19 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:27.703 19:59:19 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:27.703 19:59:19 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:27.703 19:59:19 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.703 ************************************ 00:07:27.703 START TEST accel_decomp_mcore 00:07:27.703 ************************************ 00:07:27.703 19:59:19 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:27.703 19:59:19 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:27.703 19:59:19 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:27.703 19:59:19 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:27.703 19:59:19 accel.accel_decomp_mcore 
-- accel/accel.sh@19 -- # IFS=: 00:07:27.703 19:59:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.703 19:59:19 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:27.703 19:59:19 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:27.703 19:59:19 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.703 19:59:19 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.703 19:59:19 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.703 19:59:19 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.703 19:59:19 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.703 19:59:19 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:27.703 19:59:19 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:27.703 [2024-05-15 19:59:19.976394] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:07:27.703 [2024-05-15 19:59:19.976438] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4017268 ] 00:07:27.703 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.703 [2024-05-15 19:59:20.062220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:27.703 [2024-05-15 19:59:20.131190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.703 [2024-05-15 19:59:20.131329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.703 [2024-05-15 19:59:20.131432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.703 [2024-05-15 19:59:20.131567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # 
val='1 seconds' 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.703 19:59:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.092 19:59:21 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.092 00:07:29.092 real 0m1.307s 00:07:29.092 user 0m4.438s 00:07:29.092 sys 0m0.115s 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:29.092 19:59:21 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:29.092 ************************************ 00:07:29.092 END TEST accel_decomp_mcore 00:07:29.092 ************************************ 00:07:29.092 19:59:21 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:29.092 19:59:21 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:29.092 19:59:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:29.092 19:59:21 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.092 ************************************ 00:07:29.092 START TEST accel_decomp_full_mcore 00:07:29.092 ************************************ 00:07:29.092 19:59:21 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:29.092 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:29.092 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:29.092 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.092 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.092 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:29.092 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:29.092 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:29.092 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.092 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.092 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.092 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 
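The accel_decomp_mcore summary just above reports real 0m1.307s but user 0m4.438s: with -m 0xf the app announces "Total cores available: 4" and starts a reactor on cores 0-3, so the four reactors together accumulate about 4.4 s of CPU time inside a roughly 1.3 s wall-clock window while the 1-second workload itself is unchanged. Counting the cores a mask selects is just counting set bits, e.g.:

mask=0xf              # core mask passed via -m in the mcore runs above
count=0
for (( m = mask; m > 0; m >>= 1 )); do (( count += (m & 1) )); done
echo "$mask selects $count cores"   # 0xf -> 4, matching the four reactor notices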
00:07:29.092 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.092 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:29.092 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:29.092 [2024-05-15 19:59:21.382306] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:07:29.092 [2024-05-15 19:59:21.382442] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4017621 ] 00:07:29.092 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.092 [2024-05-15 19:59:21.478763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:29.092 [2024-05-15 19:59:21.546409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.092 [2024-05-15 19:59:21.546613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.092 [2024-05-15 19:59:21.546776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:29.092 [2024-05-15 19:59:21.546776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.092 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.092 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.092 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.092 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.092 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.092 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.092 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.092 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.092 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.092 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.092 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.092 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.092 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:29.092 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.092 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.092 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.092 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.092 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.093 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.093 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.093 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.093 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.093 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.093 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.093 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:29.093 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:07:29.093 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:29.093 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.093 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.093 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:29.093 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.093 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.093 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.093 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.093 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.093 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.093 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.093 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:29.093 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.093 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:29.093 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.093 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.093 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:29.093 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.093 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.093 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.093 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:29.093 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.093 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.093 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.093 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:29.093 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.093 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.093 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.354 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:29.354 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.354 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.354 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.355 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.355 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.355 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.355 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.355 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:29.355 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.355 19:59:21 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.355 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.355 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.355 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.355 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.355 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.355 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.355 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.355 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.355 19:59:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.301 19:59:22 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.301 00:07:30.301 real 0m1.346s 00:07:30.301 user 0m4.496s 00:07:30.301 sys 0m0.133s 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:30.301 19:59:22 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:30.301 ************************************ 00:07:30.301 END TEST accel_decomp_full_mcore 00:07:30.301 ************************************ 00:07:30.301 19:59:22 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:30.301 19:59:22 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:30.301 19:59:22 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:30.301 19:59:22 accel -- common/autotest_common.sh@10 -- # set +x 00:07:30.301 ************************************ 00:07:30.301 START TEST accel_decomp_mthread 00:07:30.301 ************************************ 00:07:30.301 19:59:22 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:30.301 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:30.301 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:30.301 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.301 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.301 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:30.301 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:30.301 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:30.301 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:30.301 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:30.301 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.301 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.301 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:30.301 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:30.301 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
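For reference, the decompress cases traced above all reduce to a single accel_perf invocation whose flags are visible in the xtrace. A minimal by-hand sketch of the accel_decomp_mthread run follows (SPDK_DIR is a placeholder for the tree path used in this job; flag meanings are inferred from the surrounding trace, and the JSON config the harness pipes on /dev/fd/62 is dropped here because accel_json_cfg is empty in this run, assuming the software module default):

# Sketch only -- reproduces the traced accel_decomp_mthread run outside the harness.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # placeholder
# -t 1           : run the workload for 1 second
# -w decompress  : operation under test
# -l .../bib     : compressed input consumed by the decompress workload
# -y             : verify the decompressed output
# -T 2           : two worker threads (the "mthread" variant)
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
    -l "$SPDK_DIR/test/accel/bib" -y -T 2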
00:07:30.592 [2024-05-15 19:59:22.805326] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:07:30.593 [2024-05-15 19:59:22.805416] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4017977 ] 00:07:30.593 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.593 [2024-05-15 19:59:22.890568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.593 [2024-05-15 19:59:22.960157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.593 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:30.593 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.593 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.593 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.593 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:30.593 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.593 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.593 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.593 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:30.593 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.593 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.593 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.593 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:30.593 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.593 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.593 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.594 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:30.594 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.594 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.594 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.594 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:30.594 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.594 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.594 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.594 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:30.594 19:59:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.594 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:30.594 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.594 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.594 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:30.594 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.594 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.594 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.594 19:59:23 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:30.594 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.594 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.594 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.594 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:30.594 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.595 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:30.595 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.595 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.595 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:30.595 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.595 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.595 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.595 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:30.595 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.595 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.595 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.595 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:30.595 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.595 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.595 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.595 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:30.595 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.595 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.595 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.595 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:30.595 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.595 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.596 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.596 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:30.596 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.596 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.596 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.596 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:30.596 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.596 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.596 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.596 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:30.596 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:30.596 19:59:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.596 19:59:23 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:07:31.989 19:59:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.989 19:59:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.989 19:59:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.989 19:59:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.989 19:59:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.989 19:59:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.989 19:59:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.989 19:59:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.989 19:59:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.989 19:59:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.989 19:59:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.989 19:59:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.989 19:59:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.989 19:59:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.989 19:59:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.989 19:59:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.989 19:59:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.989 19:59:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.989 19:59:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.989 19:59:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.989 19:59:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.989 19:59:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.989 19:59:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.989 19:59:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.989 19:59:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.989 19:59:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.989 19:59:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.989 19:59:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.989 19:59:24 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:31.989 19:59:24 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:31.989 19:59:24 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.989 00:07:31.989 real 0m1.319s 00:07:31.989 user 0m1.198s 00:07:31.989 sys 0m0.132s 00:07:31.989 19:59:24 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:31.989 19:59:24 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:31.989 ************************************ 00:07:31.989 END TEST accel_decomp_mthread 00:07:31.989 ************************************ 00:07:31.989 19:59:24 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:31.989 19:59:24 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:31.989 19:59:24 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:31.989 19:59:24 
accel -- common/autotest_common.sh@10 -- # set +x 00:07:31.989 ************************************ 00:07:31.989 START TEST accel_decomp_full_mthread 00:07:31.989 ************************************ 00:07:31.989 19:59:24 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:31.989 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:31.989 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:31.989 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.989 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.989 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:31.989 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:31.989 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.989 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.989 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:31.989 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.989 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.989 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.989 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:31.989 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:31.989 [2024-05-15 19:59:24.182547] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:07:31.989 [2024-05-15 19:59:24.182605] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4018301 ] 00:07:31.989 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.989 [2024-05-15 19:59:24.266942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.989 [2024-05-15 19:59:24.332702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.989 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.989 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.989 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.989 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.989 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.989 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.990 19:59:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.373 19:59:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.373 19:59:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.373 19:59:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.373 19:59:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.373 19:59:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.373 19:59:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.373 19:59:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.373 19:59:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.373 19:59:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.373 19:59:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.373 19:59:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.373 19:59:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.373 19:59:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.373 19:59:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.373 19:59:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.373 19:59:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.373 19:59:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.373 19:59:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.373 19:59:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.373 19:59:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.373 19:59:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.373 19:59:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.373 19:59:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.373 19:59:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.373 19:59:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.373 19:59:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.373 19:59:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.373 19:59:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.374 19:59:25 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:33.374 19:59:25 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:33.374 19:59:25 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.374 00:07:33.374 real 0m1.341s 00:07:33.374 user 0m1.235s 00:07:33.374 sys 0m0.118s 00:07:33.374 19:59:25 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:33.374 19:59:25 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:33.374 ************************************ 00:07:33.374 END TEST accel_decomp_full_mthread 00:07:33.374 
************************************ 00:07:33.374 19:59:25 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:33.374 19:59:25 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:33.374 19:59:25 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.374 19:59:25 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.374 19:59:25 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.374 19:59:25 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:33.374 19:59:25 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.374 19:59:25 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.374 19:59:25 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:33.374 19:59:25 accel -- accel/accel.sh@41 -- # jq -r . 00:07:33.374 19:59:25 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:33.374 19:59:25 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:33.374 19:59:25 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.374 ************************************ 00:07:33.374 START TEST accel_dif_functional_tests 00:07:33.374 ************************************ 00:07:33.374 19:59:25 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:33.374 [2024-05-15 19:59:25.621288] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:07:33.374 [2024-05-15 19:59:25.621342] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4018505 ] 00:07:33.374 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.374 [2024-05-15 19:59:25.685779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:33.374 [2024-05-15 19:59:25.752290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.374 [2024-05-15 19:59:25.752433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.374 [2024-05-15 19:59:25.752436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.374 00:07:33.374 00:07:33.374 CUnit - A unit testing framework for C - Version 2.1-3 00:07:33.374 http://cunit.sourceforge.net/ 00:07:33.374 00:07:33.374 00:07:33.374 Suite: accel_dif 00:07:33.374 Test: verify: DIF generated, GUARD check ...passed 00:07:33.374 Test: verify: DIF generated, APPTAG check ...passed 00:07:33.374 Test: verify: DIF generated, REFTAG check ...passed 00:07:33.374 Test: verify: DIF not generated, GUARD check ...[2024-05-15 19:59:25.808077] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:33.374 [2024-05-15 19:59:25.808114] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:33.374 passed 00:07:33.374 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 19:59:25.808145] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:33.374 [2024-05-15 19:59:25.808159] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:33.374 passed 00:07:33.374 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 19:59:25.808174] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:33.374 [2024-05-15 
19:59:25.808189] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:33.374 passed 00:07:33.374 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:33.374 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 19:59:25.808231] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:33.374 passed 00:07:33.374 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:33.374 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:33.374 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:33.374 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-15 19:59:25.808350] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:33.374 passed 00:07:33.374 Test: generate copy: DIF generated, GUARD check ...passed 00:07:33.374 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:33.374 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:33.374 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:33.374 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:33.374 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:33.374 Test: generate copy: iovecs-len validate ...[2024-05-15 19:59:25.808532] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:33.374 passed 00:07:33.374 Test: generate copy: buffer alignment validate ...passed 00:07:33.374 00:07:33.374 Run Summary: Type Total Ran Passed Failed Inactive 00:07:33.374 suites 1 1 n/a 0 0 00:07:33.374 tests 20 20 20 0 0 00:07:33.374 asserts 204 204 204 0 n/a 00:07:33.374 00:07:33.374 Elapsed time = 0.002 seconds 00:07:33.636 00:07:33.636 real 0m0.349s 00:07:33.636 user 0m0.441s 00:07:33.636 sys 0m0.129s 00:07:33.636 19:59:25 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:33.636 19:59:25 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:33.636 ************************************ 00:07:33.636 END TEST accel_dif_functional_tests 00:07:33.636 ************************************ 00:07:33.636 00:07:33.636 real 0m30.570s 00:07:33.636 user 0m33.837s 00:07:33.636 sys 0m4.456s 00:07:33.636 19:59:25 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:33.636 19:59:25 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.636 ************************************ 00:07:33.636 END TEST accel 00:07:33.636 ************************************ 00:07:33.637 19:59:26 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:33.637 19:59:26 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:33.637 19:59:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:33.637 19:59:26 -- common/autotest_common.sh@10 -- # set +x 00:07:33.637 ************************************ 00:07:33.637 START TEST accel_rpc 00:07:33.637 ************************************ 00:07:33.637 19:59:26 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:33.637 * Looking for test storage... 
00:07:33.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:33.898 19:59:26 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:33.898 19:59:26 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=4018749 00:07:33.898 19:59:26 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 4018749 00:07:33.898 19:59:26 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:33.898 19:59:26 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 4018749 ']' 00:07:33.898 19:59:26 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.898 19:59:26 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:33.898 19:59:26 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.898 19:59:26 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:33.898 19:59:26 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.898 [2024-05-15 19:59:26.198138] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:07:33.898 [2024-05-15 19:59:26.198206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4018749 ] 00:07:33.898 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.898 [2024-05-15 19:59:26.286514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.898 [2024-05-15 19:59:26.357679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.841 19:59:27 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:34.841 19:59:27 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:34.841 19:59:27 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:34.841 19:59:27 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:34.841 19:59:27 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:34.841 19:59:27 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:34.841 19:59:27 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:34.841 19:59:27 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:34.841 19:59:27 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:34.841 19:59:27 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.841 ************************************ 00:07:34.841 START TEST accel_assign_opcode 00:07:34.841 ************************************ 00:07:34.841 19:59:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:34.841 19:59:27 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:34.841 19:59:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.841 19:59:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:34.841 [2024-05-15 19:59:27.099798] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:34.841 19:59:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:07:34.841 19:59:27 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:34.841 19:59:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.841 19:59:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:34.841 [2024-05-15 19:59:27.111826] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:34.841 19:59:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.841 19:59:27 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:34.841 19:59:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.841 19:59:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:34.841 19:59:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.841 19:59:27 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:34.841 19:59:27 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:34.841 19:59:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.841 19:59:27 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:34.841 19:59:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:34.841 19:59:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.841 software 00:07:34.841 00:07:34.841 real 0m0.212s 00:07:34.841 user 0m0.049s 00:07:34.841 sys 0m0.010s 00:07:34.841 19:59:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:34.841 19:59:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:34.841 ************************************ 00:07:34.841 END TEST accel_assign_opcode 00:07:34.841 ************************************ 00:07:34.841 19:59:27 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 4018749 00:07:35.103 19:59:27 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 4018749 ']' 00:07:35.103 19:59:27 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 4018749 00:07:35.103 19:59:27 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:07:35.103 19:59:27 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:35.103 19:59:27 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4018749 00:07:35.103 19:59:27 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:35.103 19:59:27 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:35.103 19:59:27 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4018749' 00:07:35.103 killing process with pid 4018749 00:07:35.103 19:59:27 accel_rpc -- common/autotest_common.sh@965 -- # kill 4018749 00:07:35.103 19:59:27 accel_rpc -- common/autotest_common.sh@970 -- # wait 4018749 00:07:35.367 00:07:35.367 real 0m1.562s 00:07:35.367 user 0m1.721s 00:07:35.367 sys 0m0.423s 00:07:35.367 19:59:27 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:35.367 19:59:27 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.367 ************************************ 00:07:35.367 END TEST accel_rpc 00:07:35.367 ************************************ 00:07:35.367 19:59:27 -- spdk/autotest.sh@181 -- # 
run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:35.367 19:59:27 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:35.367 19:59:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:35.367 19:59:27 -- common/autotest_common.sh@10 -- # set +x 00:07:35.367 ************************************ 00:07:35.367 START TEST app_cmdline 00:07:35.367 ************************************ 00:07:35.367 19:59:27 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:35.367 * Looking for test storage... 00:07:35.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:35.367 19:59:27 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:35.367 19:59:27 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=4019162 00:07:35.367 19:59:27 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 4019162 00:07:35.367 19:59:27 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:35.367 19:59:27 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 4019162 ']' 00:07:35.367 19:59:27 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.367 19:59:27 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:35.367 19:59:27 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.367 19:59:27 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:35.367 19:59:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:35.367 [2024-05-15 19:59:27.844095] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:07:35.367 [2024-05-15 19:59:27.844161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4019162 ] 00:07:35.629 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.629 [2024-05-15 19:59:27.928422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.629 [2024-05-15 19:59:27.993425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.201 19:59:28 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:36.201 19:59:28 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:07:36.201 19:59:28 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:36.463 { 00:07:36.463 "version": "SPDK v24.05-pre git sha1 40b11d962", 00:07:36.463 "fields": { 00:07:36.463 "major": 24, 00:07:36.463 "minor": 5, 00:07:36.463 "patch": 0, 00:07:36.463 "suffix": "-pre", 00:07:36.463 "commit": "40b11d962" 00:07:36.463 } 00:07:36.463 } 00:07:36.463 19:59:28 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:36.463 19:59:28 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:36.463 19:59:28 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:36.463 19:59:28 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:36.463 19:59:28 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:36.463 19:59:28 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:36.463 19:59:28 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.463 19:59:28 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:36.463 19:59:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:36.463 19:59:28 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.463 19:59:28 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:36.463 19:59:28 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:36.463 19:59:28 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:36.463 19:59:28 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:36.463 19:59:28 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:36.463 19:59:28 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:36.463 19:59:28 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:36.463 19:59:28 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:36.463 19:59:28 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:36.463 19:59:28 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:36.463 19:59:28 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:36.463 19:59:28 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:36.463 19:59:28 
app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:36.463 19:59:28 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:36.725 request: 00:07:36.725 { 00:07:36.725 "method": "env_dpdk_get_mem_stats", 00:07:36.725 "req_id": 1 00:07:36.725 } 00:07:36.725 Got JSON-RPC error response 00:07:36.725 response: 00:07:36.725 { 00:07:36.725 "code": -32601, 00:07:36.725 "message": "Method not found" 00:07:36.725 } 00:07:36.725 19:59:29 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:36.725 19:59:29 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:36.725 19:59:29 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:36.725 19:59:29 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:36.725 19:59:29 app_cmdline -- app/cmdline.sh@1 -- # killprocess 4019162 00:07:36.725 19:59:29 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 4019162 ']' 00:07:36.725 19:59:29 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 4019162 00:07:36.725 19:59:29 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:07:36.725 19:59:29 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:36.725 19:59:29 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4019162 00:07:36.725 19:59:29 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:36.725 19:59:29 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:36.725 19:59:29 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4019162' 00:07:36.725 killing process with pid 4019162 00:07:36.725 19:59:29 app_cmdline -- common/autotest_common.sh@965 -- # kill 4019162 00:07:36.725 19:59:29 app_cmdline -- common/autotest_common.sh@970 -- # wait 4019162 00:07:36.987 00:07:36.987 real 0m1.655s 00:07:36.987 user 0m2.064s 00:07:36.987 sys 0m0.414s 00:07:36.987 19:59:29 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:36.987 19:59:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:36.987 ************************************ 00:07:36.987 END TEST app_cmdline 00:07:36.987 ************************************ 00:07:36.987 19:59:29 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:36.987 19:59:29 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:36.987 19:59:29 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:36.987 19:59:29 -- common/autotest_common.sh@10 -- # set +x 00:07:36.987 ************************************ 00:07:36.987 START TEST version 00:07:36.987 ************************************ 00:07:36.987 19:59:29 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:37.249 * Looking for test storage... 
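The app_cmdline test above amounts to a whitelist check on the JSON-RPC surface: with --rpcs-allowed only the two listed methods resolve, and anything else (env_dpdk_get_mem_stats here) comes back as the -32601 "Method not found" response shown. A condensed sketch of that exchange, using only commands that appear in the trace (SPDK_DIR is a placeholder; the harness uses waitforlisten on /var/tmp/spdk.sock, approximated by a sleep here):

# Sketch only -- whitelisted JSON-RPC exchange from the app_cmdline test.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # placeholder
"$SPDK_DIR/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
tgt_pid=$!
sleep 2   # stand-in for waitforlisten on /var/tmp/spdk.sock
"$SPDK_DIR/scripts/rpc.py" spdk_get_version        # allowed: prints the version JSON
"$SPDK_DIR/scripts/rpc.py" rpc_get_methods         # allowed: lists the two whitelisted methods
"$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats  # not whitelisted: JSON-RPC error -32601
kill "$tgt_pid"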
00:07:37.249 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:37.249 19:59:29 version -- app/version.sh@17 -- # get_header_version major 00:07:37.249 19:59:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:37.249 19:59:29 version -- app/version.sh@14 -- # cut -f2 00:07:37.249 19:59:29 version -- app/version.sh@14 -- # tr -d '"' 00:07:37.249 19:59:29 version -- app/version.sh@17 -- # major=24 00:07:37.249 19:59:29 version -- app/version.sh@18 -- # get_header_version minor 00:07:37.249 19:59:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:37.249 19:59:29 version -- app/version.sh@14 -- # cut -f2 00:07:37.249 19:59:29 version -- app/version.sh@14 -- # tr -d '"' 00:07:37.249 19:59:29 version -- app/version.sh@18 -- # minor=5 00:07:37.249 19:59:29 version -- app/version.sh@19 -- # get_header_version patch 00:07:37.249 19:59:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:37.249 19:59:29 version -- app/version.sh@14 -- # cut -f2 00:07:37.249 19:59:29 version -- app/version.sh@14 -- # tr -d '"' 00:07:37.249 19:59:29 version -- app/version.sh@19 -- # patch=0 00:07:37.249 19:59:29 version -- app/version.sh@20 -- # get_header_version suffix 00:07:37.249 19:59:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:37.249 19:59:29 version -- app/version.sh@14 -- # cut -f2 00:07:37.249 19:59:29 version -- app/version.sh@14 -- # tr -d '"' 00:07:37.249 19:59:29 version -- app/version.sh@20 -- # suffix=-pre 00:07:37.249 19:59:29 version -- app/version.sh@22 -- # version=24.5 00:07:37.249 19:59:29 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:37.249 19:59:29 version -- app/version.sh@28 -- # version=24.5rc0 00:07:37.249 19:59:29 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:37.249 19:59:29 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:37.249 19:59:29 version -- app/version.sh@30 -- # py_version=24.5rc0 00:07:37.249 19:59:29 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:07:37.249 00:07:37.249 real 0m0.173s 00:07:37.249 user 0m0.087s 00:07:37.249 sys 0m0.121s 00:07:37.249 19:59:29 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:37.249 19:59:29 version -- common/autotest_common.sh@10 -- # set +x 00:07:37.249 ************************************ 00:07:37.249 END TEST version 00:07:37.249 ************************************ 00:07:37.249 19:59:29 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:07:37.249 19:59:29 -- spdk/autotest.sh@194 -- # uname -s 00:07:37.249 19:59:29 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:37.249 19:59:29 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:37.249 19:59:29 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:37.249 19:59:29 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 
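The version check traced just above is a scrape of include/spdk/version.h cross-checked against the Python bindings. A simplified stand-in for the script's get_header_version helper, condensed from the grep/cut/tr pipeline shown in the trace (SPDK_DIR is a placeholder):

# Sketch only -- header vs. python-bindings version check from version.sh.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # placeholder
get_header_version() {   # e.g. "#define SPDK_VERSION_MAJOR 24" -> 24
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" \
        "$SPDK_DIR/include/spdk/version.h" | cut -f2 | tr -d '"'
}
major=$(get_header_version MAJOR)    # 24 in this run
minor=$(get_header_version MINOR)    # 5
suffix=$(get_header_version SUFFIX)  # -pre; the harness maps this to rc0 before comparing
# The bindings must agree with the headers (24.5rc0 in this run):
PYTHONPATH="$SPDK_DIR/python" python3 -c 'import spdk; print(spdk.__version__)'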
00:07:37.249 19:59:29 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:07:37.249 19:59:29 -- spdk/autotest.sh@256 -- # timing_exit lib 00:07:37.249 19:59:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:37.249 19:59:29 -- common/autotest_common.sh@10 -- # set +x 00:07:37.249 19:59:29 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:07:37.249 19:59:29 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:07:37.249 19:59:29 -- spdk/autotest.sh@275 -- # '[' 1 -eq 1 ']' 00:07:37.249 19:59:29 -- spdk/autotest.sh@276 -- # export NET_TYPE 00:07:37.249 19:59:29 -- spdk/autotest.sh@279 -- # '[' tcp = rdma ']' 00:07:37.249 19:59:29 -- spdk/autotest.sh@282 -- # '[' tcp = tcp ']' 00:07:37.249 19:59:29 -- spdk/autotest.sh@283 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:37.249 19:59:29 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:37.249 19:59:29 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:37.249 19:59:29 -- common/autotest_common.sh@10 -- # set +x 00:07:37.249 ************************************ 00:07:37.249 START TEST nvmf_tcp 00:07:37.249 ************************************ 00:07:37.249 19:59:29 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:37.511 * Looking for test storage... 00:07:37.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:37.511 19:59:29 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:37.511 19:59:29 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:37.511 19:59:29 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:37.511 19:59:29 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:37.511 19:59:29 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:37.511 19:59:29 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:37.511 19:59:29 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:37.511 19:59:29 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:37.511 19:59:29 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:37.511 19:59:29 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:37.511 19:59:29 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:37.511 19:59:29 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:37.511 19:59:29 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:37.511 19:59:29 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:37.511 19:59:29 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:37.511 19:59:29 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:37.511 19:59:29 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:37.511 19:59:29 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:37.511 19:59:29 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:37.511 19:59:29 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:37.511 19:59:29 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:37.511 19:59:29 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.511 19:59:29 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.511 19:59:29 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.511 19:59:29 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.511 19:59:29 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.511 19:59:29 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.511 19:59:29 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:37.511 19:59:29 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.511 19:59:29 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:37.511 19:59:29 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:37.511 19:59:29 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:37.511 19:59:29 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:37.511 19:59:29 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:37.511 19:59:29 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:37.511 19:59:29 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:37.511 19:59:29 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:37.511 19:59:29 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:37.511 19:59:29 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:37.511 19:59:29 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:37.511 19:59:29 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:37.511 19:59:29 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:37.511 19:59:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:37.511 19:59:29 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:37.511 19:59:29 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:37.511 19:59:29 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:37.511 19:59:29 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:37.511 
19:59:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:37.511 ************************************ 00:07:37.511 START TEST nvmf_example 00:07:37.511 ************************************ 00:07:37.511 19:59:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:37.511 * Looking for test storage... 00:07:37.773 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:37.773 19:59:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:45.916 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:45.916 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:45.916 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:45.916 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:45.916 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:45.916 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:45.916 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:45.916 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:45.916 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:45.916 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:45.917 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:45.917 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:45.917 Found net devices under 
0000:31:00.0: cvl_0_0 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:45.917 Found net devices under 0000:31:00.1: cvl_0_1 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:45.917 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:46.178 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:46.178 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:46.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:46.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:07:46.178 00:07:46.178 --- 10.0.0.2 ping statistics --- 00:07:46.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.179 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:07:46.179 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:46.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:46.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.355 ms 00:07:46.179 00:07:46.179 --- 10.0.0.1 ping statistics --- 00:07:46.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.179 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:07:46.179 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:46.179 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:46.179 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:46.179 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:46.179 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:46.179 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:46.179 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:46.179 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:46.179 19:59:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:46.179 19:59:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:46.179 19:59:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:46.179 19:59:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:46.179 19:59:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.179 19:59:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:46.179 19:59:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:46.179 19:59:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=4023939 00:07:46.179 19:59:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:46.179 19:59:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:46.179 19:59:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 4023939 00:07:46.179 19:59:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 4023939 ']' 00:07:46.179 19:59:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.179 19:59:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:46.179 19:59:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
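Editor's note: at this point nvmf/common.sh has turned the two e810 ports found above into a self-contained NVMe/TCP test topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (target side), cvl_0_1 stays in the root namespace as 10.0.0.1 (initiator side), TCP port 4420 is opened in the firewall, and a ping in each direction confirms sub-millisecond connectivity before the example target is launched inside the namespace. Condensed replay of those steps, using the interface and namespace names reported in this run:

  # nvmf_tcp_init, condensed from the trace above (names are specific to this machine).
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                       # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # first port becomes the target NIC
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP
  ping -c 1 10.0.0.2                                 # initiator -> target (0.652 ms above)
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator (0.355 ms above)

The example target is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/examples/nvmf -i 0 -g 10000 -m 0xF), so the perf initiator later reaches it at 10.0.0.2:4420 from the root namespace.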
00:07:46.179 19:59:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:46.179 19:59:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.179 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.121 19:59:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:47.121 19:59:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:07:47.121 19:59:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:47.121 19:59:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:47.121 19:59:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:47.121 19:59:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:47.121 19:59:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.121 19:59:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:47.121 19:59:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.121 19:59:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:47.121 19:59:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.121 19:59:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:47.121 19:59:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.121 19:59:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:47.121 19:59:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:47.121 19:59:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.121 19:59:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:47.121 19:59:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.121 19:59:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:47.121 19:59:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:47.121 19:59:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.121 19:59:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:47.121 19:59:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.121 19:59:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:47.121 19:59:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.121 19:59:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:47.121 19:59:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.121 19:59:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:47.121 19:59:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:47.121 EAL: No free 2048 kB hugepages reported on node 1 
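Editor's note: once waitforlisten confirms the target is up, the data path is provisioned entirely over JSON-RPC and then exercised from the initiator side. The rpc_cmd wrapper above hides the actual rpc.py invocation, so the invocation below is an illustrative equivalent (socket defaults assumed); the arguments themselves are copied verbatim from the trace:

  # Provisioning + workload, mirrored from the rpc_cmd / spdk_nvme_perf calls above.
  rpc=scripts/rpc.py                                               # wrapper detail, assumed
  $rpc nvmf_create_transport -t tcp -o -u 8192                     # TCP transport, options as in the trace
  $rpc bdev_malloc_create 64 512                                   # 64 MiB, 512 B blocks -> "Malloc0"
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # expose the malloc bdev as NSID 1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side: queue depth 64, 4 KiB random read/write mix, 10 s run — the command launched above.
  build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The 10-second run that follows settles at roughly 16.6 k IOPS / 65 MiB/s with a 3.85 ms average latency, per the summary below.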
00:07:59.355 Initializing NVMe Controllers 00:07:59.355 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:59.355 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:59.355 Initialization complete. Launching workers. 00:07:59.356 ======================================================== 00:07:59.356 Latency(us) 00:07:59.356 Device Information : IOPS MiB/s Average min max 00:07:59.356 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16625.96 64.95 3850.34 904.22 17960.01 00:07:59.356 ======================================================== 00:07:59.356 Total : 16625.96 64.95 3850.34 904.22 17960.01 00:07:59.356 00:07:59.356 19:59:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:59.356 19:59:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:59.356 19:59:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:59.356 19:59:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:59.356 19:59:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:59.356 19:59:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:59.356 19:59:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:59.356 19:59:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:59.356 rmmod nvme_tcp 00:07:59.356 rmmod nvme_fabrics 00:07:59.356 rmmod nvme_keyring 00:07:59.356 19:59:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:59.356 19:59:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:59.356 19:59:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:59.356 19:59:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 4023939 ']' 00:07:59.356 19:59:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 4023939 00:07:59.356 19:59:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 4023939 ']' 00:07:59.356 19:59:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 4023939 00:07:59.356 19:59:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:07:59.356 19:59:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:59.356 19:59:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4023939 00:07:59.356 19:59:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:07:59.356 19:59:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:07:59.356 19:59:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4023939' 00:07:59.356 killing process with pid 4023939 00:07:59.356 19:59:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 4023939 00:07:59.356 19:59:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 4023939 00:07:59.356 nvmf threads initialize successfully 00:07:59.356 bdev subsystem init successfully 00:07:59.356 created a nvmf target service 00:07:59.356 create targets's poll groups done 00:07:59.356 all subsystems of target started 00:07:59.356 nvmf target is running 00:07:59.356 all subsystems of target stopped 00:07:59.356 destroy targets's poll groups done 00:07:59.356 destroyed the nvmf target service 00:07:59.356 bdev subsystem finish successfully 00:07:59.356 nvmf threads destroy successfully 00:07:59.356 19:59:50 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:59.356 19:59:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:59.356 19:59:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:59.356 19:59:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:59.356 19:59:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:59.356 19:59:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.356 19:59:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:59.356 19:59:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.928 19:59:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:59.928 19:59:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:59.928 19:59:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:59.928 19:59:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:59.928 00:07:59.928 real 0m22.299s 00:07:59.928 user 0m47.432s 00:07:59.928 sys 0m7.356s 00:07:59.928 19:59:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:59.929 19:59:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:59.929 ************************************ 00:07:59.929 END TEST nvmf_example 00:07:59.929 ************************************ 00:07:59.929 19:59:52 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:59.929 19:59:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:59.929 19:59:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:59.929 19:59:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:59.929 ************************************ 00:07:59.929 START TEST nvmf_filesystem 00:07:59.929 ************************************ 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:59.929 * Looking for test storage... 
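Editor's note: nvmftestfini then unwinds the setup in reverse — the initiator-side kernel modules are unloaded, the target process (pid 4023939) is killed, the target namespace is removed and the leftover initiator address is flushed — after which the test reports ~22 s wall time and hands off to nvmf_filesystem. Condensed sketch of that teardown; everything except the namespace deletion is taken from the modprobe/killprocess/flush lines above (the xtrace output of _remove_spdk_ns is redirected away in this run, so the ip netns delete line is an assumption):

  # Teardown mirrored from nvmftestfini above.
  modprobe -v -r nvme-tcp            # pulls out nvme_tcp / nvme_fabrics / nvme_keyring, per the rmmod lines
  modprobe -v -r nvme-fabrics
  kill 4023939                       # killprocess $nvmfpid, then wait for it to exit
  ip netns delete cvl_0_0_ns_spdk    # assumed content of _remove_spdk_ns (its trace is suppressed above)
  ip -4 addr flush cvl_0_1           # drop the initiator-side 10.0.0.1/24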
00:07:59.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:59.929 19:59:52 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:59.929 19:59:52 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:59.929 19:59:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:59.929 
19:59:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:00.194 19:59:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:00.194 19:59:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:00.194 19:59:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:00.194 19:59:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:00.194 19:59:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:00.194 19:59:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:00.194 19:59:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:00.194 19:59:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:00.194 #define SPDK_CONFIG_H 00:08:00.194 #define SPDK_CONFIG_APPS 1 00:08:00.194 #define SPDK_CONFIG_ARCH native 00:08:00.194 #undef SPDK_CONFIG_ASAN 00:08:00.194 #undef SPDK_CONFIG_AVAHI 00:08:00.194 #undef SPDK_CONFIG_CET 00:08:00.194 #define SPDK_CONFIG_COVERAGE 1 00:08:00.194 #define SPDK_CONFIG_CROSS_PREFIX 00:08:00.194 #undef SPDK_CONFIG_CRYPTO 00:08:00.194 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:00.194 #undef SPDK_CONFIG_CUSTOMOCF 00:08:00.194 #undef SPDK_CONFIG_DAOS 00:08:00.194 #define SPDK_CONFIG_DAOS_DIR 00:08:00.194 #define SPDK_CONFIG_DEBUG 1 00:08:00.194 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:00.194 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:00.194 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:00.194 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:00.194 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:00.194 #undef SPDK_CONFIG_DPDK_UADK 00:08:00.194 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:00.194 #define SPDK_CONFIG_EXAMPLES 1 00:08:00.194 #undef SPDK_CONFIG_FC 00:08:00.194 #define SPDK_CONFIG_FC_PATH 00:08:00.194 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:00.194 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:00.194 #undef SPDK_CONFIG_FUSE 00:08:00.194 #undef SPDK_CONFIG_FUZZER 00:08:00.194 #define SPDK_CONFIG_FUZZER_LIB 00:08:00.194 #undef SPDK_CONFIG_GOLANG 00:08:00.194 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:00.194 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:00.194 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:00.194 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:08:00.194 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:00.194 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:00.194 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:00.194 #define SPDK_CONFIG_IDXD 1 00:08:00.194 #undef SPDK_CONFIG_IDXD_KERNEL 00:08:00.194 #undef SPDK_CONFIG_IPSEC_MB 00:08:00.194 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:00.194 #define SPDK_CONFIG_ISAL 1 00:08:00.194 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:00.194 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:00.194 #define SPDK_CONFIG_LIBDIR 00:08:00.194 #undef SPDK_CONFIG_LTO 00:08:00.194 #define SPDK_CONFIG_MAX_LCORES 00:08:00.194 #define SPDK_CONFIG_NVME_CUSE 1 00:08:00.194 #undef SPDK_CONFIG_OCF 00:08:00.194 #define SPDK_CONFIG_OCF_PATH 00:08:00.194 #define SPDK_CONFIG_OPENSSL_PATH 00:08:00.194 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:00.194 #define SPDK_CONFIG_PGO_DIR 00:08:00.194 #undef 
SPDK_CONFIG_PGO_USE 00:08:00.194 #define SPDK_CONFIG_PREFIX /usr/local 00:08:00.194 #undef SPDK_CONFIG_RAID5F 00:08:00.194 #undef SPDK_CONFIG_RBD 00:08:00.194 #define SPDK_CONFIG_RDMA 1 00:08:00.194 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:00.194 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:00.194 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:00.194 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:00.194 #define SPDK_CONFIG_SHARED 1 00:08:00.194 #undef SPDK_CONFIG_SMA 00:08:00.194 #define SPDK_CONFIG_TESTS 1 00:08:00.194 #undef SPDK_CONFIG_TSAN 00:08:00.194 #define SPDK_CONFIG_UBLK 1 00:08:00.194 #define SPDK_CONFIG_UBSAN 1 00:08:00.194 #undef SPDK_CONFIG_UNIT_TESTS 00:08:00.194 #undef SPDK_CONFIG_URING 00:08:00.195 #define SPDK_CONFIG_URING_PATH 00:08:00.195 #undef SPDK_CONFIG_URING_ZNS 00:08:00.195 #undef SPDK_CONFIG_USDT 00:08:00.195 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:00.195 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:00.195 #undef SPDK_CONFIG_VFIO_USER 00:08:00.195 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:00.195 #define SPDK_CONFIG_VHOST 1 00:08:00.195 #define SPDK_CONFIG_VIRTIO 1 00:08:00.195 #undef SPDK_CONFIG_VTUNE 00:08:00.195 #define SPDK_CONFIG_VTUNE_DIR 00:08:00.195 #define SPDK_CONFIG_WERROR 1 00:08:00.195 #define SPDK_CONFIG_WPDK_DIR 00:08:00.195 #undef SPDK_CONFIG_XNVME 00:08:00.195 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 1 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 0 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:08:00.195 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # : 0 00:08:00.196 19:59:52 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo 
leak:libfuse3.so 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:08:00.196 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 
00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j144 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 4026744 ]] 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 4026744 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.s1DGHY 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.s1DGHY/tests/target /tmp/spdk.s1DGHY 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- 
# avails["$mount"]=67108864 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=968249344 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4316180480 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=119714590720 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=129371009024 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=9656418304 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=64629882880 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=64685502464 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=55619584 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=25864224768 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=25874202624 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=9977856 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=efivarfs 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=efivarfs 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=189440 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=507904 00:08:00.197 19:59:52 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=314368 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=64683634688 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=64685506560 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=1871872 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12937093120 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=12937097216 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:08:00.197 * Looking for test storage... 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=119714590720 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=11871010816 00:08:00.197 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:00.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:00.198 19:59:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:08.398 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 
(0x8086 - 0x159b)' 00:08:08.398 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:08.398 Found net devices under 0000:31:00.0: cvl_0_0 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:08.398 Found net devices under 0000:31:00.1: cvl_0_1 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:08.398 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:08.399 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:08.399 20:00:00 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:08.399 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:08.399 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:08.399 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:08.399 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:08.399 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:08.399 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:08.399 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:08.399 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:08.399 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:08.399 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:08.399 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:08.399 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:08.399 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:08.399 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:08.660 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:08.660 20:00:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:08.660 20:00:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:08.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:08.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:08:08.660 00:08:08.660 --- 10.0.0.2 ping statistics --- 00:08:08.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.660 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:08:08.660 20:00:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:08.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:08.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.429 ms 00:08:08.660 00:08:08.660 --- 10.0.0.1 ping statistics --- 00:08:08.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.660 rtt min/avg/max/mdev = 0.429/0.429/0.429/0.000 ms 00:08:08.660 20:00:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:08.660 20:00:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:08:08.660 20:00:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:08.660 20:00:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:08.660 20:00:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:08.660 20:00:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:08.660 20:00:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:08.660 20:00:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:08.660 20:00:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:08.660 20:00:01 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:08.660 20:00:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:08.660 20:00:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:08.660 20:00:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.660 ************************************ 00:08:08.660 START TEST nvmf_filesystem_no_in_capsule 00:08:08.660 ************************************ 00:08:08.660 20:00:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:08:08.660 20:00:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:08:08.660 20:00:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:08.660 20:00:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:08.660 20:00:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:08.660 20:00:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.661 20:00:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=4031116 00:08:08.661 20:00:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 4031116 00:08:08.661 20:00:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:08.661 20:00:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 4031116 ']' 00:08:08.661 20:00:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.661 20:00:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:08.661 20:00:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.661 20:00:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:08.661 20:00:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.922 [2024-05-15 20:00:01.184017] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:08:08.922 [2024-05-15 20:00:01.184078] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.922 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.922 [2024-05-15 20:00:01.280894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:08.922 [2024-05-15 20:00:01.379930] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:08.922 [2024-05-15 20:00:01.379988] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:08.922 [2024-05-15 20:00:01.379997] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:08.922 [2024-05-15 20:00:01.380005] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:08.922 [2024-05-15 20:00:01.380012] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:08.922 [2024-05-15 20:00:01.380145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.922 [2024-05-15 20:00:01.380298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:08.922 [2024-05-15 20:00:01.380380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.923 [2024-05-15 20:00:01.380380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.862 [2024-05-15 20:00:02.108017] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:09.862 20:00:02 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.862 Malloc1 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.862 [2024-05-15 20:00:02.235347] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:09.862 [2024-05-15 20:00:02.235587] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:09.862 20:00:02 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:08:09.862 { 00:08:09.862 "name": "Malloc1", 00:08:09.862 "aliases": [ 00:08:09.862 "6e551571-4e30-4d6a-aa0f-ea24c5216e04" 00:08:09.862 ], 00:08:09.862 "product_name": "Malloc disk", 00:08:09.862 "block_size": 512, 00:08:09.862 "num_blocks": 1048576, 00:08:09.862 "uuid": "6e551571-4e30-4d6a-aa0f-ea24c5216e04", 00:08:09.862 "assigned_rate_limits": { 00:08:09.862 "rw_ios_per_sec": 0, 00:08:09.862 "rw_mbytes_per_sec": 0, 00:08:09.862 "r_mbytes_per_sec": 0, 00:08:09.862 "w_mbytes_per_sec": 0 00:08:09.862 }, 00:08:09.862 "claimed": true, 00:08:09.862 "claim_type": "exclusive_write", 00:08:09.862 "zoned": false, 00:08:09.862 "supported_io_types": { 00:08:09.862 "read": true, 00:08:09.862 "write": true, 00:08:09.862 "unmap": true, 00:08:09.862 "write_zeroes": true, 00:08:09.862 "flush": true, 00:08:09.862 "reset": true, 00:08:09.862 "compare": false, 00:08:09.862 "compare_and_write": false, 00:08:09.862 "abort": true, 00:08:09.862 "nvme_admin": false, 00:08:09.862 "nvme_io": false 00:08:09.862 }, 00:08:09.862 "memory_domains": [ 00:08:09.862 { 00:08:09.862 "dma_device_id": "system", 00:08:09.862 "dma_device_type": 1 00:08:09.862 }, 00:08:09.862 { 00:08:09.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.862 "dma_device_type": 2 00:08:09.862 } 00:08:09.862 ], 00:08:09.862 "driver_specific": {} 00:08:09.862 } 00:08:09.862 ]' 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:09.862 20:00:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:11.772 20:00:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:11.772 20:00:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:08:11.772 20:00:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:11.772 20:00:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:11.772 20:00:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:08:13.678 20:00:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:13.678 20:00:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:13.678 20:00:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:13.678 20:00:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:13.678 20:00:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:13.678 20:00:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:08:13.678 20:00:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:13.678 20:00:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:13.678 20:00:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:13.678 20:00:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:13.678 20:00:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:13.678 20:00:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:13.678 20:00:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:13.678 20:00:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:13.678 20:00:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:13.678 20:00:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:13.678 20:00:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:13.938 20:00:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:13.939 20:00:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:15.319 20:00:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:15.319 20:00:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:15.319 20:00:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:15.319 20:00:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:15.319 20:00:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:15.319 ************************************ 00:08:15.319 START TEST filesystem_ext4 00:08:15.319 ************************************ 
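For reference, the setup that the xtrace above walks through can be condensed into a short standalone sketch. This is an illustrative summary of the same commands shown in the trace, not part of the test run itself; it assumes a running nvmf_tgt reachable through SPDK's scripts/rpc.py (abbreviated rpc.py below) where the suite's rpc_cmd wrapper appears in the log, and the --hostnqn/--hostid flags from the nvme connect line are omitted for brevity.

# Sketch of the no-in-capsule target/host setup traced above (assumptions noted in the lead-in).
rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0             # -c 0: in-capsule data size of 0
rpc.py bdev_malloc_create 512 512 -b Malloc1                    # 512 MiB malloc bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host side: connect over NVMe/TCP, then carve one GPT partition for the filesystem subtests.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe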
00:08:15.319 20:00:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:15.319 20:00:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:15.319 20:00:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:15.319 20:00:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:15.319 20:00:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:08:15.319 20:00:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:15.319 20:00:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:08:15.319 20:00:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:08:15.319 20:00:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:08:15.319 20:00:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:08:15.319 20:00:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:15.319 mke2fs 1.46.5 (30-Dec-2021) 00:08:15.319 Discarding device blocks: 0/522240 done 00:08:15.319 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:15.319 Filesystem UUID: 26586a08-3db8-4c43-8bda-020babddc12b 00:08:15.319 Superblock backups stored on blocks: 00:08:15.319 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:15.319 00:08:15.319 Allocating group tables: 0/64 done 00:08:15.319 Writing inode tables: 0/64 done 00:08:15.319 Creating journal (8192 blocks): done 00:08:15.319 Writing superblocks and filesystem accounting information: 0/64 done 00:08:15.319 00:08:15.319 20:00:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:08:15.319 20:00:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:15.890 20:00:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:15.890 20:00:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:15.890 20:00:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:15.890 20:00:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:15.890 20:00:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:15.890 20:00:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:15.890 20:00:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 4031116 00:08:15.890 20:00:08 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:15.890 20:00:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:15.890 20:00:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:15.890 20:00:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:15.890 00:08:15.890 real 0m0.786s 00:08:15.890 user 0m0.033s 00:08:15.890 sys 0m0.063s 00:08:15.890 20:00:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:15.890 20:00:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:15.890 ************************************ 00:08:15.890 END TEST filesystem_ext4 00:08:15.890 ************************************ 00:08:15.890 20:00:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:15.890 20:00:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:15.890 20:00:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:15.890 20:00:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:15.890 ************************************ 00:08:15.890 START TEST filesystem_btrfs 00:08:15.890 ************************************ 00:08:15.890 20:00:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:15.890 20:00:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:15.890 20:00:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:15.890 20:00:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:15.890 20:00:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:08:15.890 20:00:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:15.891 20:00:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:08:15.891 20:00:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:08:15.891 20:00:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:08:15.891 20:00:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:08:15.891 20:00:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:16.152 btrfs-progs v6.6.2 00:08:16.152 See https://btrfs.readthedocs.io for more information. 
00:08:16.152 00:08:16.152 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:16.152 NOTE: several default settings have changed in version 5.15, please make sure 00:08:16.152 this does not affect your deployments: 00:08:16.152 - DUP for metadata (-m dup) 00:08:16.152 - enabled no-holes (-O no-holes) 00:08:16.152 - enabled free-space-tree (-R free-space-tree) 00:08:16.152 00:08:16.152 Label: (null) 00:08:16.152 UUID: be99380a-5b01-488a-9ca7-cb69aba81458 00:08:16.152 Node size: 16384 00:08:16.152 Sector size: 4096 00:08:16.152 Filesystem size: 510.00MiB 00:08:16.152 Block group profiles: 00:08:16.152 Data: single 8.00MiB 00:08:16.152 Metadata: DUP 32.00MiB 00:08:16.152 System: DUP 8.00MiB 00:08:16.152 SSD detected: yes 00:08:16.152 Zoned device: no 00:08:16.152 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:16.152 Runtime features: free-space-tree 00:08:16.152 Checksum: crc32c 00:08:16.152 Number of devices: 1 00:08:16.152 Devices: 00:08:16.152 ID SIZE PATH 00:08:16.152 1 510.00MiB /dev/nvme0n1p1 00:08:16.152 00:08:16.152 20:00:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:08:16.152 20:00:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:16.722 20:00:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:16.722 20:00:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:16.722 20:00:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:16.722 20:00:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:16.722 20:00:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:16.723 20:00:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:16.723 20:00:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 4031116 00:08:16.723 20:00:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:16.723 20:00:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:16.723 20:00:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:16.723 20:00:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:16.723 00:08:16.723 real 0m0.834s 00:08:16.723 user 0m0.022s 00:08:16.723 sys 0m0.139s 00:08:16.723 20:00:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:16.723 20:00:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:16.723 ************************************ 00:08:16.723 END TEST filesystem_btrfs 00:08:16.723 ************************************ 00:08:16.983 20:00:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test 
filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:16.983 20:00:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:16.983 20:00:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:16.983 20:00:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:16.983 ************************************ 00:08:16.983 START TEST filesystem_xfs 00:08:16.983 ************************************ 00:08:16.983 20:00:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:08:16.983 20:00:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:16.983 20:00:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:16.983 20:00:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:16.983 20:00:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:08:16.983 20:00:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:16.983 20:00:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:08:16.983 20:00:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:08:16.983 20:00:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:08:16.983 20:00:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:08:16.983 20:00:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:16.983 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:16.983 = sectsz=512 attr=2, projid32bit=1 00:08:16.983 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:16.983 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:16.983 data = bsize=4096 blocks=130560, imaxpct=25 00:08:16.983 = sunit=0 swidth=0 blks 00:08:16.983 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:16.983 log =internal log bsize=4096 blocks=16384, version=2 00:08:16.983 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:16.983 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:17.924 Discarding blocks...Done. 
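After mkfs completes, each filesystem_* subtest (ext4, btrfs, xfs) runs the same mount/IO/unmount check, visible in the trace that follows. Condensed into a sketch: the commands and the /mnt/device mount point are taken from the trace, while the shell variables are illustrative and $nvmfpid stands for the target PID (4031116 in this part).

# Per-filesystem check, as traced for ext4, btrfs and xfs above (illustrative condensation).
fstype=xfs                                # also run with ext4 and btrfs
dev=/dev/nvme0n1p1
mnt=/mnt/device

mkfs."$fstype" -f "$dev"                  # make_filesystem() uses -F for ext4, -f otherwise
mount "$dev" "$mnt"
touch "$mnt/aaa"                          # create a file, flush, delete it, flush again
sync
rm "$mnt/aaa"
sync
umount "$mnt"

kill -0 "$nvmfpid"                        # the nvmf_tgt process must still be alive
lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still exposed to the host
lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still present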
00:08:17.924 20:00:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:08:17.924 20:00:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:19.846 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:19.846 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:19.846 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:19.846 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:19.846 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:19.846 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:20.107 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 4031116 00:08:20.107 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:20.107 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:20.107 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:20.107 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:20.107 00:08:20.107 real 0m3.110s 00:08:20.107 user 0m0.031s 00:08:20.107 sys 0m0.071s 00:08:20.107 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:20.107 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:20.107 ************************************ 00:08:20.107 END TEST filesystem_xfs 00:08:20.107 ************************************ 00:08:20.107 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:20.107 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:20.107 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:20.368 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:20.368 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:20.368 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:20.368 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:20.368 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:20.368 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:20.368 
20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:20.368 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:20.368 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:20.368 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.368 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:20.368 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.368 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:20.368 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 4031116 00:08:20.368 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 4031116 ']' 00:08:20.368 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 4031116 00:08:20.368 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:20.368 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:20.368 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4031116 00:08:20.368 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:20.368 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:20.368 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4031116' 00:08:20.368 killing process with pid 4031116 00:08:20.368 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 4031116 00:08:20.368 [2024-05-15 20:00:12.824539] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:20.368 20:00:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 4031116 00:08:20.630 20:00:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:20.630 00:08:20.630 real 0m11.940s 00:08:20.630 user 0m46.860s 00:08:20.630 sys 0m1.277s 00:08:20.630 20:00:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:20.630 20:00:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:20.630 ************************************ 00:08:20.630 END TEST nvmf_filesystem_no_in_capsule 00:08:20.630 ************************************ 00:08:20.630 20:00:13 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:20.630 20:00:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # 
'[' 3 -le 1 ']' 00:08:20.630 20:00:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:20.630 20:00:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.891 ************************************ 00:08:20.891 START TEST nvmf_filesystem_in_capsule 00:08:20.891 ************************************ 00:08:20.891 20:00:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:08:20.891 20:00:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:20.891 20:00:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:20.891 20:00:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:20.891 20:00:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:20.891 20:00:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:20.891 20:00:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=4034197 00:08:20.891 20:00:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 4034197 00:08:20.891 20:00:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:20.891 20:00:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 4034197 ']' 00:08:20.891 20:00:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.891 20:00:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:20.891 20:00:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.891 20:00:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:20.891 20:00:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:20.891 [2024-05-15 20:00:13.199596] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:08:20.891 [2024-05-15 20:00:13.199651] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:20.891 EAL: No free 2048 kB hugepages reported on node 1 00:08:20.891 [2024-05-15 20:00:13.293634] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:20.891 [2024-05-15 20:00:13.360651] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:20.891 [2024-05-15 20:00:13.360703] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:20.891 [2024-05-15 20:00:13.360711] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:20.891 [2024-05-15 20:00:13.360718] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:20.891 [2024-05-15 20:00:13.360724] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:20.891 [2024-05-15 20:00:13.360829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.891 [2024-05-15 20:00:13.360964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:20.891 [2024-05-15 20:00:13.361122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.891 [2024-05-15 20:00:13.361123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:21.834 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:21.834 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:08:21.834 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:21.834 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:21.834 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:21.834 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:21.834 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:21.834 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:21.834 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.834 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:21.834 [2024-05-15 20:00:14.079026] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:21.834 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.834 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:21.834 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.834 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:21.834 Malloc1 00:08:21.834 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.834 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:21.834 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.834 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:21.834 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.834 20:00:14 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:21.834 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.834 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:21.834 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.834 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:21.834 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.834 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:21.834 [2024-05-15 20:00:14.210409] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:21.834 [2024-05-15 20:00:14.210675] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:21.834 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.834 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:21.834 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:08:21.834 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:08:21.834 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:08:21.834 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:08:21.834 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:21.834 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.834 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:21.834 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.834 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:08:21.834 { 00:08:21.834 "name": "Malloc1", 00:08:21.834 "aliases": [ 00:08:21.834 "33ba5042-52a1-4bcc-b6ad-e5b675aef93e" 00:08:21.834 ], 00:08:21.834 "product_name": "Malloc disk", 00:08:21.834 "block_size": 512, 00:08:21.834 "num_blocks": 1048576, 00:08:21.834 "uuid": "33ba5042-52a1-4bcc-b6ad-e5b675aef93e", 00:08:21.834 "assigned_rate_limits": { 00:08:21.834 "rw_ios_per_sec": 0, 00:08:21.834 "rw_mbytes_per_sec": 0, 00:08:21.834 "r_mbytes_per_sec": 0, 00:08:21.834 "w_mbytes_per_sec": 0 00:08:21.834 }, 00:08:21.834 "claimed": true, 00:08:21.835 "claim_type": "exclusive_write", 00:08:21.835 "zoned": false, 00:08:21.835 "supported_io_types": { 00:08:21.835 "read": true, 00:08:21.835 "write": true, 00:08:21.835 "unmap": true, 00:08:21.835 "write_zeroes": true, 00:08:21.835 "flush": true, 00:08:21.835 "reset": true, 
00:08:21.835 "compare": false, 00:08:21.835 "compare_and_write": false, 00:08:21.835 "abort": true, 00:08:21.835 "nvme_admin": false, 00:08:21.835 "nvme_io": false 00:08:21.835 }, 00:08:21.835 "memory_domains": [ 00:08:21.835 { 00:08:21.835 "dma_device_id": "system", 00:08:21.835 "dma_device_type": 1 00:08:21.835 }, 00:08:21.835 { 00:08:21.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.835 "dma_device_type": 2 00:08:21.835 } 00:08:21.835 ], 00:08:21.835 "driver_specific": {} 00:08:21.835 } 00:08:21.835 ]' 00:08:21.835 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:08:21.835 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:08:21.835 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:08:21.835 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:08:21.835 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:08:21.835 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:08:21.835 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:21.835 20:00:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:23.747 20:00:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:23.747 20:00:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:08:23.747 20:00:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:23.747 20:00:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:23.747 20:00:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:08:25.658 20:00:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:25.658 20:00:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:25.658 20:00:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:25.658 20:00:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:25.658 20:00:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:25.658 20:00:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:08:25.658 20:00:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:25.658 20:00:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:25.658 20:00:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:25.658 20:00:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:25.658 20:00:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:25.658 20:00:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:25.658 20:00:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:25.658 20:00:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:25.658 20:00:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:25.658 20:00:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:25.658 20:00:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:25.917 20:00:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:26.489 20:00:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:27.430 20:00:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:27.430 20:00:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:27.430 20:00:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:27.430 20:00:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:27.430 20:00:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:27.430 ************************************ 00:08:27.430 START TEST filesystem_in_capsule_ext4 00:08:27.430 ************************************ 00:08:27.430 20:00:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:27.430 20:00:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:27.430 20:00:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:27.430 20:00:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:27.430 20:00:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:08:27.430 20:00:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:27.430 20:00:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:08:27.430 20:00:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:08:27.430 20:00:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:08:27.430 20:00:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:08:27.430 20:00:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:27.430 mke2fs 1.46.5 (30-Dec-2021) 00:08:27.430 Discarding device blocks: 0/522240 done 00:08:27.430 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:27.430 Filesystem UUID: 1c63eaf3-4e1c-4b2c-9cc7-b18a5cee61a0 00:08:27.430 Superblock backups stored on blocks: 00:08:27.430 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:27.430 00:08:27.430 Allocating group tables: 0/64 done 00:08:27.430 Writing inode tables: 0/64 done 00:08:27.690 Creating journal (8192 blocks): done 00:08:28.634 Writing superblocks and filesystem accounting information: 0/6450/64 done 00:08:28.634 00:08:28.634 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:08:28.634 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:28.896 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:28.896 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:28.896 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:28.896 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:28.896 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:28.896 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:28.896 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 4034197 00:08:28.896 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:28.896 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:28.896 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:28.896 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:28.896 00:08:28.896 real 0m1.509s 00:08:28.896 user 0m0.034s 00:08:28.896 sys 0m0.063s 00:08:28.896 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:28.896 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:28.896 ************************************ 00:08:28.896 END TEST filesystem_in_capsule_ext4 00:08:28.896 ************************************ 00:08:29.156 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:29.156 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:29.156 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:29.156 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:29.156 ************************************ 00:08:29.156 START TEST filesystem_in_capsule_btrfs 00:08:29.156 ************************************ 00:08:29.156 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:29.156 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:29.156 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:29.156 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:29.156 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:08:29.156 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:29.156 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:08:29.156 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:08:29.156 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:08:29.156 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:08:29.156 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:29.156 btrfs-progs v6.6.2 00:08:29.156 See https://btrfs.readthedocs.io for more information. 00:08:29.156 00:08:29.156 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:29.156 NOTE: several default settings have changed in version 5.15, please make sure 00:08:29.156 this does not affect your deployments: 00:08:29.156 - DUP for metadata (-m dup) 00:08:29.156 - enabled no-holes (-O no-holes) 00:08:29.156 - enabled free-space-tree (-R free-space-tree) 00:08:29.156 00:08:29.156 Label: (null) 00:08:29.156 UUID: ef362b77-7237-4e9a-9c32-adad19c96a01 00:08:29.156 Node size: 16384 00:08:29.156 Sector size: 4096 00:08:29.156 Filesystem size: 510.00MiB 00:08:29.156 Block group profiles: 00:08:29.156 Data: single 8.00MiB 00:08:29.156 Metadata: DUP 32.00MiB 00:08:29.156 System: DUP 8.00MiB 00:08:29.156 SSD detected: yes 00:08:29.156 Zoned device: no 00:08:29.156 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:29.156 Runtime features: free-space-tree 00:08:29.156 Checksum: crc32c 00:08:29.156 Number of devices: 1 00:08:29.156 Devices: 00:08:29.156 ID SIZE PATH 00:08:29.156 1 510.00MiB /dev/nvme0n1p1 00:08:29.156 00:08:29.156 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:08:29.156 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:29.740 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:29.740 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:29.740 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:29.740 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:29.740 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:29.740 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:29.740 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 4034197 00:08:29.740 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:29.740 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:29.740 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:29.740 20:00:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:29.740 00:08:29.740 real 0m0.558s 00:08:29.740 user 0m0.039s 00:08:29.740 sys 0m0.125s 00:08:29.740 20:00:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:29.740 20:00:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:29.740 ************************************ 00:08:29.740 END TEST filesystem_in_capsule_btrfs 00:08:29.740 ************************************ 00:08:29.740 20:00:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:29.740 20:00:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:29.740 20:00:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:29.740 20:00:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:29.740 ************************************ 00:08:29.740 START TEST filesystem_in_capsule_xfs 00:08:29.740 ************************************ 00:08:29.740 20:00:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:08:29.741 20:00:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:29.741 20:00:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:29.741 20:00:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:29.741 20:00:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:08:29.741 20:00:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:29.741 20:00:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:08:29.741 20:00:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:08:29.741 20:00:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:08:29.741 20:00:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:08:29.741 20:00:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:29.741 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:29.741 = sectsz=512 attr=2, projid32bit=1 00:08:29.741 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:29.741 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:29.741 data = bsize=4096 blocks=130560, imaxpct=25 00:08:29.741 = sunit=0 swidth=0 blks 00:08:29.741 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:29.741 log =internal log bsize=4096 blocks=16384, version=2 00:08:29.741 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:29.741 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:31.123 Discarding blocks...Done. 
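For reference, the per-filesystem check exercised here (btrfs above, xfs from this point on) reduces to roughly the following shell sequence. This is a minimal sketch reconstructed from the xtrace output rather than the verbatim target/filesystem.sh; the mkfs retry loop is omitted, and the device, mount point, and nvmf_tgt PID are the ones appearing in the log.

```bash
# Sketch of the filesystem_in_capsule_<fstype> body as traced above.
# Assumptions: fstype is btrfs or xfs, /dev/nvme0n1p1 is the partition created
# on the NVMe-oF attached namespace, /mnt/device exists, and nvmfpid holds the
# nvmf_tgt PID (4034197 in this run).
fstype=xfs
nvmfpid=4034197

mkfs."$fstype" -f /dev/nvme0n1p1            # force-create the filesystem
mount /dev/nvme0n1p1 /mnt/device            # mount over the fabric-attached namespace

touch /mnt/device/aaa                       # small write
sync
rm /mnt/device/aaa                          # delete it again
sync

umount /mnt/device

kill -0 "$nvmfpid"                          # target process must still be running
lsblk -l -o NAME | grep -q -w nvme0n1       # controller still present
lsblk -l -o NAME | grep -q -w nvme0n1p1     # partition still present
```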
00:08:31.123 20:00:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:08:31.123 20:00:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:32.518 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:32.780 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:32.780 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:32.780 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:32.780 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:32.780 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:32.780 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 4034197 00:08:32.780 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:32.780 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:32.780 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:32.780 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:32.780 00:08:32.780 real 0m3.079s 00:08:32.780 user 0m0.026s 00:08:32.780 sys 0m0.078s 00:08:32.780 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:32.780 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:32.780 ************************************ 00:08:32.780 END TEST filesystem_in_capsule_xfs 00:08:32.780 ************************************ 00:08:32.780 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:32.780 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:32.780 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:33.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.042 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:33.042 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:33.042 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:33.042 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:33.042 20:00:25 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:33.042 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:33.042 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:33.042 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:33.042 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.042 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:33.042 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.042 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:33.042 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 4034197 00:08:33.042 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 4034197 ']' 00:08:33.042 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 4034197 00:08:33.042 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:33.042 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:33.042 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4034197 00:08:33.042 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:33.042 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:33.042 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4034197' 00:08:33.042 killing process with pid 4034197 00:08:33.042 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 4034197 00:08:33.042 [2024-05-15 20:00:25.463189] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:33.042 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 4034197 00:08:33.302 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:33.302 00:08:33.302 real 0m12.554s 00:08:33.302 user 0m49.412s 00:08:33.302 sys 0m1.275s 00:08:33.302 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:33.302 20:00:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:33.302 ************************************ 00:08:33.302 END TEST nvmf_filesystem_in_capsule 00:08:33.302 ************************************ 00:08:33.302 20:00:25 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:33.302 20:00:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:08:33.302 20:00:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:33.302 20:00:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:33.302 20:00:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:33.302 20:00:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:33.302 20:00:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:33.302 rmmod nvme_tcp 00:08:33.302 rmmod nvme_fabrics 00:08:33.302 rmmod nvme_keyring 00:08:33.562 20:00:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:33.562 20:00:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:33.562 20:00:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:33.562 20:00:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:33.562 20:00:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:33.562 20:00:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:33.562 20:00:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:33.562 20:00:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:33.562 20:00:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:33.562 20:00:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.562 20:00:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:33.562 20:00:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.474 20:00:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:35.474 00:08:35.474 real 0m35.591s 00:08:35.474 user 1m38.783s 00:08:35.474 sys 0m9.038s 00:08:35.474 20:00:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:35.474 20:00:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:35.474 ************************************ 00:08:35.474 END TEST nvmf_filesystem 00:08:35.474 ************************************ 00:08:35.474 20:00:27 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:35.474 20:00:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:35.474 20:00:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:35.474 20:00:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:35.474 ************************************ 00:08:35.474 START TEST nvmf_target_discovery 00:08:35.474 ************************************ 00:08:35.474 20:00:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:35.734 * Looking for test storage... 
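The discovery test that starts here drives the target entirely through the RPC interface: it creates a TCP transport, four null-bdev-backed subsystems with listeners on 10.0.0.2:4420, a discovery listener, and a referral on port 4430, then reads everything back with nvme discover and nvmf_get_subsystems. The trace below corresponds roughly to this sketch; rpc_cmd stands in for the harness wrapper around scripts/rpc.py, and NVME_HOST expands to the --hostnqn/--hostid pair generated in nvmf/common.sh.

```bash
# Outline of the discovery target setup and queries traced below (sketch only).
rpc_cmd nvmf_create_transport -t tcp -o -u 8192      # TCP transport with the options used in the trace

for i in $(seq 1 4); do
    rpc_cmd bdev_null_create "Null$i" 102400 512     # sizes from NULL_BDEV_SIZE / NULL_BLOCK_SIZE
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        -a -s "SPDK0000000000000$i"                  # allow any host, fixed serial number
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done

# discovery subsystem listener plus a referral on a second port
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

# read back: six discovery log entries from the initiator, full config via RPC
nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_get_subsystems
```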
00:08:35.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:35.735 20:00:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:43.916 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:43.916 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:43.916 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:43.916 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:43.916 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:43.916 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:43.916 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:43.916 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:43.916 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:43.916 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:43.916 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:43.916 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:43.916 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:43.916 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:43.916 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:43.916 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:43.916 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:43.917 20:00:36 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:43.917 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:43.917 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:43.917 Found net devices under 0000:31:00.0: cvl_0_0 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:43.917 Found net devices under 0000:31:00.1: cvl_0_1 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:43.917 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:43.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:43.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:08:43.917 00:08:43.917 --- 10.0.0.2 ping statistics --- 00:08:43.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.917 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:08:44.177 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:44.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:44.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.380 ms 00:08:44.177 00:08:44.177 --- 10.0.0.1 ping statistics --- 00:08:44.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.177 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:08:44.177 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:44.177 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:44.177 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:44.177 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:44.177 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:44.177 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:44.177 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:44.177 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:44.177 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:44.177 20:00:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:44.177 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:44.177 20:00:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:44.177 20:00:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:44.178 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=4041591 00:08:44.178 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 4041591 00:08:44.178 20:00:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:44.178 20:00:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 4041591 ']' 00:08:44.178 20:00:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.178 20:00:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:44.178 20:00:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:44.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.178 20:00:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:44.178 20:00:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:44.178 [2024-05-15 20:00:36.534620] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:08:44.178 [2024-05-15 20:00:36.534688] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.178 EAL: No free 2048 kB hugepages reported on node 1 00:08:44.178 [2024-05-15 20:00:36.632409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:44.439 [2024-05-15 20:00:36.730107] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:44.439 [2024-05-15 20:00:36.730181] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:44.439 [2024-05-15 20:00:36.730189] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:44.439 [2024-05-15 20:00:36.730196] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:44.439 [2024-05-15 20:00:36.730203] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:44.439 [2024-05-15 20:00:36.730336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.439 [2024-05-15 20:00:36.730431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:44.439 [2024-05-15 20:00:36.730728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:44.439 [2024-05-15 20:00:36.730732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.009 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:45.009 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:08:45.010 20:00:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:45.010 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:45.010 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:45.010 20:00:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.010 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:45.010 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.010 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:45.010 [2024-05-15 20:00:37.466101] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:45.010 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.010 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:45.010 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:45.010 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:45.010 20:00:37 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.010 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:45.010 Null1 00:08:45.010 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.010 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:45.010 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.010 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:45.010 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.010 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:45.010 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.010 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:45.270 [2024-05-15 20:00:37.526233] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:45.270 [2024-05-15 20:00:37.526473] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:45.270 Null2 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:45.270 Null3 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:45.270 Null4 00:08:45.270 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.271 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:45.271 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.271 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:45.271 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.271 20:00:37 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:45.271 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.271 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:45.271 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.271 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:45.271 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.271 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:45.271 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.271 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:45.271 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.271 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:45.271 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.271 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:45.271 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.271 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:45.271 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.271 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:08:45.531 00:08:45.531 Discovery Log Number of Records 6, Generation counter 6 00:08:45.531 =====Discovery Log Entry 0====== 00:08:45.531 trtype: tcp 00:08:45.531 adrfam: ipv4 00:08:45.531 subtype: current discovery subsystem 00:08:45.531 treq: not required 00:08:45.531 portid: 0 00:08:45.531 trsvcid: 4420 00:08:45.531 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:45.531 traddr: 10.0.0.2 00:08:45.531 eflags: explicit discovery connections, duplicate discovery information 00:08:45.531 sectype: none 00:08:45.531 =====Discovery Log Entry 1====== 00:08:45.531 trtype: tcp 00:08:45.531 adrfam: ipv4 00:08:45.531 subtype: nvme subsystem 00:08:45.531 treq: not required 00:08:45.531 portid: 0 00:08:45.531 trsvcid: 4420 00:08:45.531 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:45.531 traddr: 10.0.0.2 00:08:45.531 eflags: none 00:08:45.531 sectype: none 00:08:45.531 =====Discovery Log Entry 2====== 00:08:45.531 trtype: tcp 00:08:45.531 adrfam: ipv4 00:08:45.531 subtype: nvme subsystem 00:08:45.531 treq: not required 00:08:45.531 portid: 0 00:08:45.531 trsvcid: 4420 00:08:45.531 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:45.531 traddr: 10.0.0.2 00:08:45.531 eflags: none 00:08:45.531 sectype: none 00:08:45.531 =====Discovery Log Entry 3====== 00:08:45.531 trtype: tcp 00:08:45.531 adrfam: ipv4 00:08:45.531 subtype: nvme subsystem 00:08:45.531 treq: not required 00:08:45.531 portid: 0 00:08:45.531 trsvcid: 4420 00:08:45.531 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:45.531 traddr: 10.0.0.2 
00:08:45.531 eflags: none 00:08:45.531 sectype: none 00:08:45.531 =====Discovery Log Entry 4====== 00:08:45.531 trtype: tcp 00:08:45.531 adrfam: ipv4 00:08:45.531 subtype: nvme subsystem 00:08:45.531 treq: not required 00:08:45.531 portid: 0 00:08:45.531 trsvcid: 4420 00:08:45.531 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:45.531 traddr: 10.0.0.2 00:08:45.531 eflags: none 00:08:45.531 sectype: none 00:08:45.531 =====Discovery Log Entry 5====== 00:08:45.531 trtype: tcp 00:08:45.531 adrfam: ipv4 00:08:45.531 subtype: discovery subsystem referral 00:08:45.531 treq: not required 00:08:45.531 portid: 0 00:08:45.531 trsvcid: 4430 00:08:45.531 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:45.531 traddr: 10.0.0.2 00:08:45.531 eflags: none 00:08:45.531 sectype: none 00:08:45.531 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:45.531 Perform nvmf subsystem discovery via RPC 00:08:45.531 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:45.531 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.531 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:45.531 [ 00:08:45.531 { 00:08:45.531 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:45.531 "subtype": "Discovery", 00:08:45.531 "listen_addresses": [ 00:08:45.531 { 00:08:45.531 "trtype": "TCP", 00:08:45.531 "adrfam": "IPv4", 00:08:45.531 "traddr": "10.0.0.2", 00:08:45.531 "trsvcid": "4420" 00:08:45.531 } 00:08:45.531 ], 00:08:45.531 "allow_any_host": true, 00:08:45.531 "hosts": [] 00:08:45.531 }, 00:08:45.531 { 00:08:45.531 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:45.531 "subtype": "NVMe", 00:08:45.531 "listen_addresses": [ 00:08:45.531 { 00:08:45.531 "trtype": "TCP", 00:08:45.531 "adrfam": "IPv4", 00:08:45.531 "traddr": "10.0.0.2", 00:08:45.531 "trsvcid": "4420" 00:08:45.531 } 00:08:45.531 ], 00:08:45.531 "allow_any_host": true, 00:08:45.531 "hosts": [], 00:08:45.531 "serial_number": "SPDK00000000000001", 00:08:45.531 "model_number": "SPDK bdev Controller", 00:08:45.531 "max_namespaces": 32, 00:08:45.531 "min_cntlid": 1, 00:08:45.531 "max_cntlid": 65519, 00:08:45.531 "namespaces": [ 00:08:45.531 { 00:08:45.531 "nsid": 1, 00:08:45.531 "bdev_name": "Null1", 00:08:45.531 "name": "Null1", 00:08:45.531 "nguid": "4899D24E4AEA479FB728E857F73214C3", 00:08:45.531 "uuid": "4899d24e-4aea-479f-b728-e857f73214c3" 00:08:45.531 } 00:08:45.531 ] 00:08:45.531 }, 00:08:45.531 { 00:08:45.531 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:45.531 "subtype": "NVMe", 00:08:45.531 "listen_addresses": [ 00:08:45.531 { 00:08:45.531 "trtype": "TCP", 00:08:45.531 "adrfam": "IPv4", 00:08:45.531 "traddr": "10.0.0.2", 00:08:45.531 "trsvcid": "4420" 00:08:45.531 } 00:08:45.531 ], 00:08:45.531 "allow_any_host": true, 00:08:45.531 "hosts": [], 00:08:45.531 "serial_number": "SPDK00000000000002", 00:08:45.531 "model_number": "SPDK bdev Controller", 00:08:45.531 "max_namespaces": 32, 00:08:45.531 "min_cntlid": 1, 00:08:45.531 "max_cntlid": 65519, 00:08:45.531 "namespaces": [ 00:08:45.531 { 00:08:45.531 "nsid": 1, 00:08:45.531 "bdev_name": "Null2", 00:08:45.531 "name": "Null2", 00:08:45.531 "nguid": "DBB9ABA5826E4F2AA602F09257F754EE", 00:08:45.531 "uuid": "dbb9aba5-826e-4f2a-a602-f09257f754ee" 00:08:45.531 } 00:08:45.531 ] 00:08:45.531 }, 00:08:45.531 { 00:08:45.531 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:45.531 "subtype": "NVMe", 00:08:45.531 "listen_addresses": [ 
00:08:45.531 { 00:08:45.531 "trtype": "TCP", 00:08:45.531 "adrfam": "IPv4", 00:08:45.531 "traddr": "10.0.0.2", 00:08:45.531 "trsvcid": "4420" 00:08:45.531 } 00:08:45.531 ], 00:08:45.531 "allow_any_host": true, 00:08:45.531 "hosts": [], 00:08:45.531 "serial_number": "SPDK00000000000003", 00:08:45.531 "model_number": "SPDK bdev Controller", 00:08:45.531 "max_namespaces": 32, 00:08:45.531 "min_cntlid": 1, 00:08:45.531 "max_cntlid": 65519, 00:08:45.531 "namespaces": [ 00:08:45.531 { 00:08:45.531 "nsid": 1, 00:08:45.531 "bdev_name": "Null3", 00:08:45.531 "name": "Null3", 00:08:45.531 "nguid": "77406C346145462DB96E82D4D0BCD6B6", 00:08:45.531 "uuid": "77406c34-6145-462d-b96e-82d4d0bcd6b6" 00:08:45.531 } 00:08:45.531 ] 00:08:45.531 }, 00:08:45.531 { 00:08:45.531 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:45.531 "subtype": "NVMe", 00:08:45.531 "listen_addresses": [ 00:08:45.531 { 00:08:45.531 "trtype": "TCP", 00:08:45.531 "adrfam": "IPv4", 00:08:45.531 "traddr": "10.0.0.2", 00:08:45.531 "trsvcid": "4420" 00:08:45.531 } 00:08:45.531 ], 00:08:45.531 "allow_any_host": true, 00:08:45.531 "hosts": [], 00:08:45.531 "serial_number": "SPDK00000000000004", 00:08:45.531 "model_number": "SPDK bdev Controller", 00:08:45.531 "max_namespaces": 32, 00:08:45.531 "min_cntlid": 1, 00:08:45.531 "max_cntlid": 65519, 00:08:45.531 "namespaces": [ 00:08:45.531 { 00:08:45.531 "nsid": 1, 00:08:45.531 "bdev_name": "Null4", 00:08:45.531 "name": "Null4", 00:08:45.531 "nguid": "82D1FC2185E84D6E8BC06AAF14F86EF4", 00:08:45.531 "uuid": "82d1fc21-85e8-4d6e-8bc0-6aaf14f86ef4" 00:08:45.531 } 00:08:45.531 ] 00:08:45.531 } 00:08:45.531 ] 00:08:45.531 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.531 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:45.531 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:45.531 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:45.531 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.531 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:45.531 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.531 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:45.531 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.531 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:45.531 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.531 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:45.531 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:45.531 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.531 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:45.531 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.531 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:45.531 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:08:45.531 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:45.531 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.531 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:45.531 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:45.531 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.531 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:45.531 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.531 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:45.531 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.531 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:45.531 20:00:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.531 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:45.531 20:00:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:45.531 20:00:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.531 20:00:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:45.531 20:00:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.531 20:00:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:45.531 20:00:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.531 20:00:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:45.531 20:00:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.531 20:00:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:45.531 20:00:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.532 20:00:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:45.791 20:00:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.791 20:00:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:45.791 20:00:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:45.791 20:00:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.791 20:00:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:45.791 20:00:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.791 20:00:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:45.791 20:00:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:45.791 20:00:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:45.791 20:00:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:45.791 
20:00:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:45.791 20:00:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:45.791 20:00:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:45.791 20:00:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:45.791 20:00:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:45.791 20:00:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:45.791 rmmod nvme_tcp 00:08:45.791 rmmod nvme_fabrics 00:08:45.791 rmmod nvme_keyring 00:08:45.791 20:00:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:45.791 20:00:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:45.791 20:00:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:45.791 20:00:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 4041591 ']' 00:08:45.791 20:00:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 4041591 00:08:45.791 20:00:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 4041591 ']' 00:08:45.791 20:00:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 4041591 00:08:45.791 20:00:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:08:45.791 20:00:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:45.791 20:00:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4041591 00:08:45.791 20:00:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:45.791 20:00:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:45.791 20:00:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4041591' 00:08:45.791 killing process with pid 4041591 00:08:45.791 20:00:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 4041591 00:08:45.791 [2024-05-15 20:00:38.199454] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:45.791 20:00:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 4041591 00:08:46.051 20:00:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:46.051 20:00:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:46.051 20:00:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:46.051 20:00:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:46.051 20:00:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:46.051 20:00:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.051 20:00:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:46.051 20:00:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.965 20:00:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:47.965 00:08:47.965 real 0m12.437s 00:08:47.965 user 
0m8.994s 00:08:47.965 sys 0m6.621s 00:08:47.965 20:00:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:47.965 20:00:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:47.965 ************************************ 00:08:47.965 END TEST nvmf_target_discovery 00:08:47.965 ************************************ 00:08:47.965 20:00:40 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:47.965 20:00:40 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:47.965 20:00:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:47.965 20:00:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:48.227 ************************************ 00:08:48.227 START TEST nvmf_referrals 00:08:48.227 ************************************ 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:48.227 * Looking for test storage... 00:08:48.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.227 20:00:40 
nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:48.227 20:00:40 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:48.227 20:00:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:48.228 20:00:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:48.228 20:00:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:48.228 20:00:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.228 20:00:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:48.228 20:00:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.228 20:00:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:48.228 20:00:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:48.228 20:00:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:48.228 20:00:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:56.368 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:56.368 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:56.368 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:56.368 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:56.368 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:56.368 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:56.368 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:56.368 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:56.368 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:56.368 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:56.368 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:56.368 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:56.368 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:56.368 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:56.368 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:56.368 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:56.369 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:56.369 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:56.369 Found net devices under 0000:31:00.0: cvl_0_0 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:56.369 Found net devices under 0000:31:00.1: cvl_0_1 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
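The nvmf_tcp_init block that starts in the trace above (and continues just below) splits the two NIC ports into a target side and an initiator side using a network namespace. Condensed into a sketch, with the interface names and addresses exactly as they appear in this log rather than as a general recipe:

  # target-side port moves into its own namespace; the initiator port stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                  # reachability check, initiator -> target

The ping statistics that follow in the log are exactly this check, run in both directions.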
00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:56.369 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:56.631 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:56.631 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:56.631 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:56.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:56.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:08:56.631 00:08:56.631 --- 10.0.0.2 ping statistics --- 00:08:56.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.631 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:08:56.631 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:56.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:56.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.370 ms 00:08:56.631 00:08:56.631 --- 10.0.0.1 ping statistics --- 00:08:56.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.631 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:08:56.631 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:56.631 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:56.631 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:56.631 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:56.631 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:56.631 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:56.632 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:56.632 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:56.632 20:00:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:56.632 20:00:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:56.632 20:00:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:56.632 20:00:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:56.632 20:00:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:56.632 20:00:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=4046740 00:08:56.632 20:00:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 4046740 00:08:56.632 20:00:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:56.632 20:00:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 4046740 ']' 00:08:56.632 20:00:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.632 20:00:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:56.632 20:00:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.632 20:00:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:56.632 20:00:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:56.632 [2024-05-15 20:00:49.096811] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:08:56.632 [2024-05-15 20:00:49.096894] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.893 EAL: No free 2048 kB hugepages reported on node 1 00:08:56.893 [2024-05-15 20:00:49.192211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:56.893 [2024-05-15 20:00:49.288328] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:56.893 [2024-05-15 20:00:49.288389] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:56.893 [2024-05-15 20:00:49.288397] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:56.893 [2024-05-15 20:00:49.288404] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:56.893 [2024-05-15 20:00:49.288410] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:56.893 [2024-05-15 20:00:49.288542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.893 [2024-05-15 20:00:49.288672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:56.893 [2024-05-15 20:00:49.288838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.893 [2024-05-15 20:00:49.288838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:57.833 20:00:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:57.833 20:00:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:08:57.833 20:00:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:57.833 20:00:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:57.833 20:00:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:57.833 [2024-05-15 20:00:50.024073] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:57.833 [2024-05-15 20:00:50.040068] nvmf_rpc.c: 
615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:57.833 [2024-05-15 20:00:50.040285] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:57.833 20:00:50 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:57.833 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:57.834 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:57.834 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:57.834 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:57.834 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:57.834 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.834 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:57.834 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.834 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:57.834 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.834 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:58.094 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.094 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:58.094 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.094 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:58.094 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.094 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:58.094 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:58.094 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.094 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:58.094 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.094 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:58.094 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:58.094 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:58.094 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:58.094 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:58.094 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:58.094 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:58.094 20:00:50 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 00:08:58.094 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:58.094 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:58.094 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.094 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:58.094 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.094 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:58.094 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.094 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:58.094 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.094 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:58.094 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:58.094 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:58.094 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:58.094 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.094 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:58.094 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:58.094 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.355 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:58.355 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:58.355 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:58.355 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:58.355 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:58.355 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:58.355 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:58.355 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:58.355 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:58.355 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:58.355 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:58.355 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:58.355 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:58.355 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:58.355 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:58.355 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:58.355 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:58.355 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:58.355 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:58.355 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:58.355 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:58.615 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:58.615 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:58.615 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.615 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:58.615 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.615 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:58.615 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:58.615 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:58.615 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:58.615 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.615 20:00:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:58.615 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:58.615 20:00:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.615 20:00:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:58.615 20:00:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:58.615 20:00:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:58.615 20:00:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:58.615 20:00:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:58.615 20:00:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:58.615 20:00:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 
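The pattern being exercised above is the core of referrals.sh: change the referral list over RPC, then confirm from the initiator side that the discovery log page agrees. One round trip, sketched with rpc.py called directly (rpc_cmd in these scripts is a thin wrapper around scripts/rpc.py; the addresses, port 4430, and host NQN/ID below are the values shown in this log):

  # register a referral pointing discovery clients at 127.0.0.2:4430 for cnode1
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1

  # what the target thinks it is advertising
  scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'

  # what an initiator actually sees in the discovery log page served on 10.0.0.2:8009
  nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
      --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'

  # tear the referral back down and re-check that it is gone
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1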
00:08:58.615 20:00:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:58.876 20:00:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:58.876 20:00:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:58.876 20:00:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:58.876 20:00:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:58.876 20:00:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:58.876 20:00:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:58.876 20:00:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:58.876 20:00:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:58.876 20:00:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:58.876 20:00:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:58.876 20:00:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:58.876 20:00:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:58.876 20:00:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:59.136 20:00:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:59.136 20:00:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:59.136 20:00:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.136 20:00:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:59.136 20:00:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.136 20:00:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:59.136 20:00:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:59.136 20:00:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.136 20:00:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:59.136 20:00:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.136 20:00:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:59.136 20:00:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:59.136 20:00:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:59.136 20:00:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:59.136 20:00:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:59.136 20:00:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:59.136 20:00:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:59.396 20:00:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:59.396 20:00:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:59.396 20:00:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:59.396 20:00:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:59.396 20:00:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:59.396 20:00:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:59.396 20:00:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:59.396 20:00:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:59.396 20:00:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:59.396 20:00:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:59.396 rmmod nvme_tcp 00:08:59.396 rmmod nvme_fabrics 00:08:59.396 rmmod nvme_keyring 00:08:59.396 20:00:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:59.396 20:00:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:59.396 20:00:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:59.396 20:00:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 4046740 ']' 00:08:59.396 20:00:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 4046740 00:08:59.396 20:00:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 4046740 ']' 00:08:59.396 20:00:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 4046740 00:08:59.396 20:00:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:08:59.396 20:00:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:59.396 20:00:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4046740 00:08:59.396 20:00:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:59.396 20:00:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:59.396 20:00:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4046740' 00:08:59.396 killing process with pid 4046740 00:08:59.396 20:00:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 4046740 00:08:59.396 [2024-05-15 20:00:51.779615] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:59.396 20:00:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 4046740 00:08:59.657 20:00:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:59.657 20:00:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:59.657 20:00:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:59.657 20:00:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:59.657 20:00:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 
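The nvmftestfini sequence running here is the same teardown every test in this log ends with: unload the host-side NVMe modules, kill the nvmf_tgt process started for the test, and undo the namespace plumbing. Roughly, with the last two lines being a simplified stand-in for the _remove_spdk_ns helper rather than its actual contents:

  sync
  modprobe -v -r nvme-tcp           # retried up to 20 times; nvme_fabrics and nvme_keyring unload with it
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                   # the nvmf_tgt reactor for this test (pid 4046740 in this run)
  ip netns delete cvl_0_0_ns_spdk   # assumption: condensed stand-in for _remove_spdk_ns
  ip -4 addr flush cvl_0_1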
00:08:59.657 20:00:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.657 20:00:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:59.657 20:00:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.566 20:00:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:01.566 00:09:01.566 real 0m13.500s 00:09:01.566 user 0m13.829s 00:09:01.566 sys 0m6.973s 00:09:01.566 20:00:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:01.566 20:00:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:01.566 ************************************ 00:09:01.566 END TEST nvmf_referrals 00:09:01.566 ************************************ 00:09:01.566 20:00:54 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:01.566 20:00:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:01.566 20:00:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:01.566 20:00:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:01.826 ************************************ 00:09:01.826 START TEST nvmf_connect_disconnect 00:09:01.826 ************************************ 00:09:01.826 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:01.826 * Looking for test storage... 00:09:01.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.827 20:00:54 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
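connect_disconnect.sh reuses the nvmf/common.sh environment being sourced above (port 4420, a 64 MiB malloc bdev with 512-byte blocks, the generated host NQN/ID). Its loop body is not quoted in this log, but each iteration comes down to attaching and detaching the initiator; a hypothetical single pass, assuming the nqn.2016-06.io.spdk:cnode1 subsystem name used elsewhere in this run:

  # attach: a new /dev/nvmeX controller should appear, with NVMF_SERIAL (SPDKISFASTANDAWESOME) as its serial
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
      --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
  nvme list                         # confirm the namespace enumerated
  # detach again; the test repeats this pair and checks that nothing leaks between iterations
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1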
00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:09:01.827 20:00:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:09.963 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:09.963 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:09:09.963 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:09.963 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:09:09.964 
20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:09.964 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:09.964 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:09.964 Found net devices under 0000:31:00.0: cvl_0_0 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:09.964 Found net devices under 0000:31:00.1: cvl_0_1 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:09.964 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:09.965 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:09.965 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:09.965 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:09.965 20:01:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:09.965 20:01:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:09.965 20:01:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:09.965 20:01:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:09.965 20:01:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:09.965 20:01:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:09.965 20:01:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:09.965 20:01:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:09.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:09.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.543 ms 00:09:09.965 00:09:09.965 --- 10.0.0.2 ping statistics --- 00:09:09.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.965 rtt min/avg/max/mdev = 0.543/0.543/0.543/0.000 ms 00:09:09.965 20:01:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:09.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:09.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:09:09.965 00:09:09.965 --- 10.0.0.1 ping statistics --- 00:09:09.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.965 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:09:09.965 20:01:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:09.965 20:01:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:09:09.965 20:01:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:09.965 20:01:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:09.965 20:01:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:09.965 20:01:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:09.965 20:01:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:09.965 20:01:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:09.965 20:01:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:09.965 20:01:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:09.965 20:01:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:09.965 20:01:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:09.965 20:01:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:09.965 20:01:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=4051956 00:09:09.965 20:01:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 4051956 00:09:09.965 20:01:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:09.965 20:01:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 4051956 ']' 00:09:09.965 20:01:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.965 20:01:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:09.965 20:01:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.965 20:01:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:09.965 20:01:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:09.965 [2024-05-15 20:01:02.373645] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
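The nvmf_tcp_init steps traced above set up a point-to-point test topology: the target-side port (cvl_0_0) is moved into a private network namespace and addressed 10.0.0.2, the initiator-side port (cvl_0_1) stays in the host namespace as 10.0.0.1, TCP port 4420 is opened, and reachability is checked in both directions before nvmf_tgt is started inside the namespace. Condensed from the commands in the trace ($rootdir stands in for the workspace spdk directory, and the trailing '&' is assumed, since the launch wrapper itself is not shown in this excerpt):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target NIC lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays on the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                             # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespace -> host
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk "$rootdir"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &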
00:09:09.965 [2024-05-15 20:01:02.373713] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.965 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.224 [2024-05-15 20:01:02.466825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:10.224 [2024-05-15 20:01:02.545637] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:10.224 [2024-05-15 20:01:02.545694] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:10.224 [2024-05-15 20:01:02.545701] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:10.224 [2024-05-15 20:01:02.545708] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:10.224 [2024-05-15 20:01:02.545715] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:10.224 [2024-05-15 20:01:02.545839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.224 [2024-05-15 20:01:02.545952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:10.224 [2024-05-15 20:01:02.546112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.224 [2024-05-15 20:01:02.546113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:10.793 20:01:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:10.793 20:01:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:09:10.793 20:01:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:10.793 20:01:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:10.793 20:01:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:10.793 20:01:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:10.793 20:01:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:10.793 20:01:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.793 20:01:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:10.793 [2024-05-15 20:01:03.284125] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:10.793 20:01:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.053 20:01:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:11.053 20:01:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.053 20:01:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:11.053 20:01:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.053 20:01:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:11.053 20:01:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:11.053 20:01:03 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.053 20:01:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:11.053 20:01:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.053 20:01:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:11.053 20:01:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.053 20:01:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:11.053 20:01:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.053 20:01:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:11.053 20:01:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.053 20:01:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:11.053 [2024-05-15 20:01:03.347391] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:11.053 [2024-05-15 20:01:03.347622] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.053 20:01:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.053 20:01:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:09:11.053 20:01:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:09:11.054 20:01:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:09:11.054 20:01:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:13.600 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.631 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.777 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.340 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.356 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.869 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.412 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.870 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.413 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.863 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.412 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.910 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.368 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.459 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.994 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.019 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.566 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.029 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.576 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.089 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.634 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.184 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.729 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.745 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.659 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.201 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.684 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:11:54.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.669 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.219 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.827 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.285 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.827 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.193 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.277 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.824 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.824 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.368 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.447 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.386 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.386 20:04:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:04.386 20:04:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:04.386 20:04:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:04.386 20:04:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:13:04.386 20:04:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:04.386 20:04:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:13:04.386 20:04:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:04.386 20:04:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:04.386 rmmod nvme_tcp 00:13:04.386 rmmod nvme_fabrics 00:13:04.386 rmmod nvme_keyring 00:13:04.386 20:04:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:04.386 20:04:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:13:04.386 20:04:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:13:04.386 20:04:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 
4051956 ']' 00:13:04.386 20:04:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 4051956 00:13:04.386 20:04:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 4051956 ']' 00:13:04.386 20:04:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 4051956 00:13:04.386 20:04:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:13:04.386 20:04:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:04.386 20:04:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4051956 00:13:04.386 20:04:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:04.386 20:04:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:04.386 20:04:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4051956' 00:13:04.386 killing process with pid 4051956 00:13:04.386 20:04:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 4051956 00:13:04.386 [2024-05-15 20:04:56.743631] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:04.386 20:04:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 4051956 00:13:04.698 20:04:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:04.698 20:04:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:04.698 20:04:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:04.698 20:04:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:04.698 20:04:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:04.699 20:04:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.699 20:04:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:04.699 20:04:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.616 20:04:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:06.616 00:13:06.616 real 4m4.887s 00:13:06.616 user 15m31.203s 00:13:06.616 sys 0m23.023s 00:13:06.616 20:04:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:06.616 20:04:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:06.616 ************************************ 00:13:06.616 END TEST nvmf_connect_disconnect 00:13:06.616 ************************************ 00:13:06.616 20:04:59 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:06.616 20:04:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:06.616 20:04:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:06.616 20:04:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:06.616 ************************************ 00:13:06.616 START TEST nvmf_multitarget 00:13:06.616 ************************************ 00:13:06.616 
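Condensing the nvmf_connect_disconnect test that ends above: once the target is listening, one malloc-backed subsystem is built over RPC and an initiator is connected and disconnected 100 times (the repeated "disconnected 1 controller(s)" lines). The RPC calls are taken verbatim from the trace; the loop body is a hedged reconstruction, because connect_disconnect.sh's loop and the full nvme connect arguments (beyond NVME_CONNECT='nvme connect -i 8') are not shown in this excerpt:

    # Bring-up, as traced (rpc_cmd forwards to scripts/rpc.py against the running nvmf_tgt):
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc_cmd bdev_malloc_create 64 512                     # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE -> Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Reconnect loop, reconstructed (flags other than '-i 8' are assumptions):
    for i in $(seq 1 100); do
        nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # prints the NQN:... disconnected line
    done

Teardown (nvmftestfini) then unloads nvme-tcp/nvme-fabrics, kills the nvmf_tgt pid and flushes the test addresses, as the rmmod, killprocess and ip addr flush lines above show.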
20:04:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:06.878 * Looking for test storage... 00:13:06.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:13:06.878 20:04:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:15.025 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:15.025 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:13:15.025 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:15.025 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:15.025 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:15.025 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:15.025 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:15.025 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:13:15.025 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:15.026 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:15.026 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:15.026 Found net devices under 0000:31:00.0: cvl_0_0 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:15.026 Found net devices under 0000:31:00.1: cvl_0_1 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:15.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:15.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.514 ms 00:13:15.026 00:13:15.026 --- 10.0.0.2 ping statistics --- 00:13:15.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.026 rtt min/avg/max/mdev = 0.514/0.514/0.514/0.000 ms 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:15.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:15.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:13:15.026 00:13:15.026 --- 10.0.0.1 ping statistics --- 00:13:15.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.026 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=4104014 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 4104014 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 4104014 ']' 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:15.026 20:05:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:15.026 [2024-05-15 20:05:07.522445] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
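nvmfappstart above launches nvmf_tgt inside the namespace (pid 4104014) and then blocks in waitforlisten until its RPC socket answers; that helper's body is not part of this excerpt, so the loop below is only one plausible shape of the wait (polling /var/tmp/spdk.sock, the rpc_addr named in the trace), not the literal autotest_common.sh code:

    # Hypothetical wait loop -- assumes scripts/rpc.py and the default /var/tmp/spdk.sock socket.
    nvmfpid=4104014                       # pid reported by nvmfappstart above
    for _ in $(seq 1 100); do
        "$rootdir"/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        kill -0 "$nvmfpid" || exit 1      # target died before it started listening
        sleep 0.1
    done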
00:13:15.026 [2024-05-15 20:05:07.522509] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.287 EAL: No free 2048 kB hugepages reported on node 1 00:13:15.287 [2024-05-15 20:05:07.617617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:15.287 [2024-05-15 20:05:07.715181] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:15.287 [2024-05-15 20:05:07.715244] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:15.287 [2024-05-15 20:05:07.715253] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:15.287 [2024-05-15 20:05:07.715260] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:15.287 [2024-05-15 20:05:07.715266] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:15.287 [2024-05-15 20:05:07.715350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.287 [2024-05-15 20:05:07.715433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.287 [2024-05-15 20:05:07.715607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.287 [2024-05-15 20:05:07.715608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:16.227 20:05:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:16.227 20:05:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:13:16.227 20:05:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:16.227 20:05:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:16.227 20:05:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:16.227 20:05:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:16.227 20:05:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:16.227 20:05:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:16.227 20:05:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:16.227 20:05:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:16.227 20:05:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:16.227 "nvmf_tgt_1" 00:13:16.227 20:05:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:16.487 "nvmf_tgt_2" 00:13:16.487 20:05:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:16.487 20:05:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:16.487 20:05:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:16.487 
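The multitarget checks traced above assert on target counts before and after creating two extra targets; condensed, with multitarget_rpc.py's full workspace path shortened to $rpc_py for readability:

    rpc_py="$rootdir"/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]    # only the default target at start
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]    # default + the two just created

The nvmf_delete_target calls that follow in the trace drive the count back down to 1 the same way before teardown.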
20:05:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:16.748 true 00:13:16.748 20:05:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:16.748 true 00:13:16.748 20:05:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:16.748 20:05:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:17.009 20:05:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:17.009 20:05:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:17.009 20:05:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:17.009 20:05:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:17.009 20:05:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:13:17.009 20:05:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:17.009 20:05:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:13:17.009 20:05:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:17.009 20:05:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:17.009 rmmod nvme_tcp 00:13:17.009 rmmod nvme_fabrics 00:13:17.009 rmmod nvme_keyring 00:13:17.009 20:05:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:17.009 20:05:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:13:17.009 20:05:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:13:17.009 20:05:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 4104014 ']' 00:13:17.009 20:05:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 4104014 00:13:17.009 20:05:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 4104014 ']' 00:13:17.009 20:05:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 4104014 00:13:17.009 20:05:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:13:17.009 20:05:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:17.009 20:05:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4104014 00:13:17.009 20:05:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:17.009 20:05:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:17.009 20:05:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4104014' 00:13:17.009 killing process with pid 4104014 00:13:17.009 20:05:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 4104014 00:13:17.009 20:05:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 4104014 00:13:17.270 20:05:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:17.270 20:05:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:17.270 20:05:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:17.270 20:05:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:17.270 20:05:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:17.270 20:05:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.270 20:05:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:17.270 20:05:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.184 20:05:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:19.184 00:13:19.184 real 0m12.577s 00:13:19.184 user 0m10.525s 00:13:19.184 sys 0m6.740s 00:13:19.184 20:05:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:19.184 20:05:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:19.184 ************************************ 00:13:19.184 END TEST nvmf_multitarget 00:13:19.184 ************************************ 00:13:19.184 20:05:11 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:19.184 20:05:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:19.184 20:05:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:19.184 20:05:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:19.446 ************************************ 00:13:19.447 START TEST nvmf_rpc 00:13:19.447 ************************************ 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:19.447 * Looking for test storage... 00:13:19.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:19.447 20:05:11 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:19.447 
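The common.sh sourcing traced above pins down the identifiers that recur through the rest of this log: the NVMe/TCP port, the subsystem serial the later lsblk checks grep for, and a host NQN/ID pair produced by nvme gen-hostnqn. Gathered in one place, roughly as the harness sets them (values as generated for this run):

NVMF_PORT=4420                                     # listener port used by every add_listener below
NVMF_SERIAL=SPDKISFASTANDAWESOME                   # serial the lsblk/grep checks look for
NVME_HOSTNQN=$(nvme gen-hostnqn)                   # here: nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396   # the uuid part, reused as --hostid
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")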
20:05:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:13:19.447 20:05:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.592 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:27.592 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:13:27.592 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:27.592 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:27.592 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:27.592 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:27.592 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:27.592 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:13:27.592 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:27.592 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:13:27.592 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:13:27.592 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:13:27.592 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:13:27.592 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:13:27.592 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:13:27.592 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:27.592 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:27.592 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:27.593 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:27.593 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:27.593 Found net devices under 0000:31:00.0: cvl_0_0 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.593 
20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:27.593 Found net devices under 0000:31:00.1: cvl_0_1 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:27.593 20:05:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:27.593 20:05:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:27.855 20:05:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:27.855 20:05:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:27.855 20:05:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:27.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:27.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:13:27.855 00:13:27.855 --- 10.0.0.2 ping statistics --- 00:13:27.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.855 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:13:27.855 20:05:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:27.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:27.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:13:27.855 00:13:27.855 --- 10.0.0.1 ping statistics --- 00:13:27.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.855 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:13:27.855 20:05:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:27.855 20:05:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:13:27.855 20:05:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:27.855 20:05:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:27.855 20:05:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:27.855 20:05:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:27.855 20:05:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:27.855 20:05:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:27.855 20:05:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:27.855 20:05:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:27.855 20:05:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:27.855 20:05:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:27.855 20:05:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.855 20:05:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=4109223 00:13:27.855 20:05:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 4109223 00:13:27.855 20:05:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:27.855 20:05:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 4109223 ']' 00:13:27.855 20:05:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.855 20:05:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:27.855 20:05:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.855 20:05:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:27.855 20:05:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.855 [2024-05-15 20:05:20.245618] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
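The xtrace above is nvmftestinit wiring the two E810 ports into a point-to-point topology (the target port moved into its own network namespace, the initiator port left in the root namespace), verifying reachability with the pings, and then starting the target application inside that namespace; the DPDK EAL and reactor notices that follow come from that process. A condensed sketch with interface names, addresses and flags as in this run; nvmfpid/$! and waitforlisten reflect how the harness appears to track the process:

# build the two-sided topology: SPDK target in a netns, kernel initiator in the root ns
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP traffic
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# launch the target in the namespace: shm id 0, all tracepoint groups, 4 cores (0xF)
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
waitforlisten "$nvmfpid"   # harness helper: waits until /var/tmp/spdk.sock accepts RPCs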
00:13:27.855 [2024-05-15 20:05:20.245667] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.855 EAL: No free 2048 kB hugepages reported on node 1 00:13:27.856 [2024-05-15 20:05:20.336355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:28.117 [2024-05-15 20:05:20.401971] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.117 [2024-05-15 20:05:20.402012] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.117 [2024-05-15 20:05:20.402020] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:28.117 [2024-05-15 20:05:20.402028] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:28.117 [2024-05-15 20:05:20.402033] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:28.117 [2024-05-15 20:05:20.402079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.117 [2024-05-15 20:05:20.402112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.117 [2024-05-15 20:05:20.402269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.117 [2024-05-15 20:05:20.402270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:28.689 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:28.689 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:13:28.689 20:05:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:28.689 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:28.689 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.689 20:05:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.689 20:05:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:28.689 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.689 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.689 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.689 20:05:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:28.689 "tick_rate": 2400000000, 00:13:28.689 "poll_groups": [ 00:13:28.689 { 00:13:28.689 "name": "nvmf_tgt_poll_group_000", 00:13:28.689 "admin_qpairs": 0, 00:13:28.689 "io_qpairs": 0, 00:13:28.689 "current_admin_qpairs": 0, 00:13:28.689 "current_io_qpairs": 0, 00:13:28.689 "pending_bdev_io": 0, 00:13:28.689 "completed_nvme_io": 0, 00:13:28.689 "transports": [] 00:13:28.689 }, 00:13:28.689 { 00:13:28.689 "name": "nvmf_tgt_poll_group_001", 00:13:28.689 "admin_qpairs": 0, 00:13:28.689 "io_qpairs": 0, 00:13:28.689 "current_admin_qpairs": 0, 00:13:28.689 "current_io_qpairs": 0, 00:13:28.689 "pending_bdev_io": 0, 00:13:28.689 "completed_nvme_io": 0, 00:13:28.689 "transports": [] 00:13:28.689 }, 00:13:28.689 { 00:13:28.689 "name": "nvmf_tgt_poll_group_002", 00:13:28.689 "admin_qpairs": 0, 00:13:28.689 "io_qpairs": 0, 00:13:28.689 "current_admin_qpairs": 0, 00:13:28.689 "current_io_qpairs": 0, 00:13:28.689 "pending_bdev_io": 0, 00:13:28.689 "completed_nvme_io": 0, 00:13:28.689 "transports": [] 
00:13:28.689 }, 00:13:28.689 { 00:13:28.689 "name": "nvmf_tgt_poll_group_003", 00:13:28.689 "admin_qpairs": 0, 00:13:28.689 "io_qpairs": 0, 00:13:28.689 "current_admin_qpairs": 0, 00:13:28.689 "current_io_qpairs": 0, 00:13:28.689 "pending_bdev_io": 0, 00:13:28.689 "completed_nvme_io": 0, 00:13:28.689 "transports": [] 00:13:28.689 } 00:13:28.689 ] 00:13:28.689 }' 00:13:28.689 20:05:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:28.689 20:05:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:28.689 20:05:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:28.689 20:05:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:28.950 20:05:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:28.950 20:05:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:28.950 20:05:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:28.950 20:05:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:28.950 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.950 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.950 [2024-05-15 20:05:21.278549] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:28.950 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.950 20:05:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:28.950 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.950 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.950 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.950 20:05:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:28.950 "tick_rate": 2400000000, 00:13:28.950 "poll_groups": [ 00:13:28.950 { 00:13:28.950 "name": "nvmf_tgt_poll_group_000", 00:13:28.950 "admin_qpairs": 0, 00:13:28.950 "io_qpairs": 0, 00:13:28.950 "current_admin_qpairs": 0, 00:13:28.950 "current_io_qpairs": 0, 00:13:28.950 "pending_bdev_io": 0, 00:13:28.950 "completed_nvme_io": 0, 00:13:28.950 "transports": [ 00:13:28.950 { 00:13:28.950 "trtype": "TCP" 00:13:28.950 } 00:13:28.950 ] 00:13:28.950 }, 00:13:28.950 { 00:13:28.950 "name": "nvmf_tgt_poll_group_001", 00:13:28.950 "admin_qpairs": 0, 00:13:28.950 "io_qpairs": 0, 00:13:28.950 "current_admin_qpairs": 0, 00:13:28.950 "current_io_qpairs": 0, 00:13:28.950 "pending_bdev_io": 0, 00:13:28.950 "completed_nvme_io": 0, 00:13:28.950 "transports": [ 00:13:28.950 { 00:13:28.950 "trtype": "TCP" 00:13:28.950 } 00:13:28.950 ] 00:13:28.950 }, 00:13:28.950 { 00:13:28.950 "name": "nvmf_tgt_poll_group_002", 00:13:28.950 "admin_qpairs": 0, 00:13:28.950 "io_qpairs": 0, 00:13:28.950 "current_admin_qpairs": 0, 00:13:28.950 "current_io_qpairs": 0, 00:13:28.950 "pending_bdev_io": 0, 00:13:28.950 "completed_nvme_io": 0, 00:13:28.950 "transports": [ 00:13:28.950 { 00:13:28.950 "trtype": "TCP" 00:13:28.950 } 00:13:28.950 ] 00:13:28.950 }, 00:13:28.950 { 00:13:28.950 "name": "nvmf_tgt_poll_group_003", 00:13:28.950 "admin_qpairs": 0, 00:13:28.950 "io_qpairs": 0, 00:13:28.950 "current_admin_qpairs": 0, 00:13:28.950 "current_io_qpairs": 0, 00:13:28.950 "pending_bdev_io": 0, 00:13:28.950 "completed_nvme_io": 0, 00:13:28.950 "transports": [ 00:13:28.950 { 00:13:28.950 "trtype": "TCP" 00:13:28.950 } 00:13:28.950 ] 00:13:28.950 } 00:13:28.950 ] 
00:13:28.950 }' 00:13:28.950 20:05:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:28.950 20:05:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:28.950 20:05:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:28.950 20:05:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:28.950 20:05:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:28.950 20:05:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:28.950 20:05:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:28.950 20:05:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:28.950 20:05:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:28.950 20:05:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:28.950 20:05:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:28.950 20:05:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:28.950 20:05:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:28.950 20:05:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:28.950 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.950 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.950 Malloc1 00:13:28.950 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.950 20:05:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:28.951 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.951 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.951 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.951 20:05:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:28.951 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.951 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.951 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.951 20:05:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:28.951 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.951 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.212 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.212 20:05:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:29.212 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.212 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.212 [2024-05-15 20:05:21.466242] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:29.212 [2024-05-15 20:05:21.466496] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:29.212 20:05:21 
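By this point the target side has been provisioned end to end: the idle poll groups were counted, a TCP transport was created, a RAM-backed bdev was added as a namespace of subsystem cnode1, and a listener was opened on the namespace-side address. Pulled out of the trace above into one place; rpc_cmd appears to be the harness wrapper around SPDK's rpc.py talking to the socket above, and the flags are reproduced as used in this run rather than explained:

rpc_cmd nvmf_get_stats | jq '.poll_groups[].name' | wc -l            # 4 idle poll groups, one per core
rpc_cmd nvmf_create_transport -t tcp -o -u 8192                      # flags as in this run
rpc_cmd bdev_malloc_create 64 512 -b Malloc1                         # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1  # close the subsystem again for the host test
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420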
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.212 20:05:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:13:29.212 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:13:29.212 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:13:29.212 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:13:29.212 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:29.212 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:13:29.212 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:29.212 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:13:29.212 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:29.212 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:13:29.212 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:13:29.212 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:13:29.212 [2024-05-15 20:05:21.493170] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:13:29.212 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:29.212 could not add new controller: failed to write to nvme-fabrics device 00:13:29.212 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:13:29.212 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:29.212 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:29.212 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:29.212 20:05:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:29.212 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.212 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.212 20:05:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.212 20:05:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:30.596 20:05:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 
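The "does not allow host" failure above is the expected outcome of the allow_any_host -d call: with the subsystem closed, a connect is rejected until the initiator's host NQN is registered, after which the same command succeeds. Sketched with the host variables set earlier in this trace:

# connect is rejected until the host NQN is whitelisted on the subsystem
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
     -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420    # -> Input/output error
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
     -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420    # now admitted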
00:13:30.597 20:05:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:30.597 20:05:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:30.597 20:05:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:30.597 20:05:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:32.512 20:05:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:32.512 20:05:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:32.774 20:05:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:32.774 20:05:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:32.774 20:05:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:32.774 20:05:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:32.774 20:05:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:32.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.774 20:05:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:32.774 20:05:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:32.774 20:05:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:32.774 20:05:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:32.774 20:05:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:32.774 20:05:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:32.774 20:05:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:32.774 20:05:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:32.774 20:05:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.774 20:05:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.774 20:05:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.774 20:05:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:32.774 20:05:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:13:32.774 20:05:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:32.774 20:05:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:13:32.774 20:05:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:32.774 20:05:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:13:32.774 20:05:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:32.774 20:05:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:13:32.774 20:05:25 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:32.774 20:05:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:13:32.774 20:05:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:13:32.774 20:05:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:32.774 [2024-05-15 20:05:25.167590] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:13:32.774 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:32.774 could not add new controller: failed to write to nvme-fabrics device 00:13:32.774 20:05:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:13:32.774 20:05:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:32.774 20:05:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:32.774 20:05:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:32.774 20:05:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:32.774 20:05:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.774 20:05:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.774 20:05:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.774 20:05:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:34.689 20:05:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:34.689 20:05:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:34.689 20:05:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:34.689 20:05:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:34.689 20:05:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:36.221 20:05:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:36.221 20:05:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:36.482 20:05:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:36.482 20:05:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:36.482 20:05:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:36.482 20:05:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:36.482 20:05:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:36.482 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.482 20:05:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:36.482 20:05:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:36.482 20:05:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:36.482 20:05:28 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:36.482 20:05:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:36.482 20:05:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:36.482 20:05:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:36.482 20:05:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:36.482 20:05:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.482 20:05:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.482 20:05:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.482 20:05:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:36.482 20:05:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:36.482 20:05:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:36.482 20:05:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.482 20:05:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.482 20:05:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.482 20:05:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.482 20:05:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.482 20:05:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.482 [2024-05-15 20:05:28.841414] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.482 20:05:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.482 20:05:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:36.482 20:05:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.482 20:05:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.482 20:05:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.482 20:05:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:36.482 20:05:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.482 20:05:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.482 20:05:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.482 20:05:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:38.398 20:05:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:38.398 20:05:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:38.398 20:05:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:38.398 20:05:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:38.398 20:05:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:40.311 
20:05:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:40.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.311 [2024-05-15 20:05:32.587916] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set 
+x 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.311 20:05:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:41.695 20:05:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:41.695 20:05:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:41.695 20:05:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:41.695 20:05:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:41.695 20:05:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:43.610 20:05:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:43.610 20:05:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:43.610 20:05:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:43.871 20:05:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:43.871 20:05:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:43.871 20:05:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:43.871 20:05:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:43.871 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.871 20:05:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:43.871 20:05:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:43.871 20:05:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:43.871 20:05:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:43.871 20:05:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:43.871 20:05:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:43.871 20:05:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:43.871 20:05:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:43.871 20:05:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.871 20:05:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.871 20:05:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.871 20:05:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:43.871 20:05:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.871 20:05:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.871 20:05:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.871 20:05:36 
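Each pass of the for i in $(seq 1 $loops) loop traced above and below is the same subsystem lifecycle, with the serial-number check via lsblk standing in for real I/O. Condensed into one cycle (loops=5 in this run, identifiers as in the trace):

for i in $(seq 1 5); do
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
         -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 1: namespace is visible
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done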
nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:43.871 20:05:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:43.871 20:05:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.871 20:05:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.871 20:05:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.871 20:05:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:43.871 20:05:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.872 20:05:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.872 [2024-05-15 20:05:36.307108] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:43.872 20:05:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.872 20:05:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:43.872 20:05:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.872 20:05:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.872 20:05:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.872 20:05:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:43.872 20:05:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.872 20:05:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.872 20:05:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.872 20:05:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:45.785 20:05:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:45.785 20:05:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:45.785 20:05:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:45.785 20:05:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:45.785 20:05:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:47.700 20:05:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:47.700 20:05:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:47.700 20:05:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:47.700 20:05:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:47.700 20:05:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:47.700 20:05:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:47.700 20:05:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:47.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.700 20:05:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:47.700 20:05:40 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1215 -- # local i=0 00:13:47.700 20:05:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:47.700 20:05:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:47.700 20:05:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:47.700 20:05:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:47.700 20:05:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:47.700 20:05:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:47.700 20:05:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.700 20:05:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.700 20:05:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.700 20:05:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:47.700 20:05:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.700 20:05:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.700 20:05:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.700 20:05:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:47.700 20:05:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:47.700 20:05:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.700 20:05:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.700 20:05:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.700 20:05:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.700 20:05:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.700 20:05:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.700 [2024-05-15 20:05:40.130290] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.700 20:05:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.700 20:05:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:47.700 20:05:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.700 20:05:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.700 20:05:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.700 20:05:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:47.700 20:05:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.700 20:05:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.700 20:05:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.700 20:05:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:49.613 20:05:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial 
SPDKISFASTANDAWESOME 00:13:49.613 20:05:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:49.613 20:05:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:49.613 20:05:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:49.613 20:05:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:51.523 20:05:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:51.523 20:05:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:51.523 20:05:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:51.523 20:05:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:51.523 20:05:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:51.523 20:05:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:51.523 20:05:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:51.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.523 20:05:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:51.523 20:05:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:51.523 20:05:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:51.523 20:05:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:51.523 20:05:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:51.523 20:05:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:51.523 20:05:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:51.523 20:05:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:51.523 20:05:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.523 20:05:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.523 20:05:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.523 20:05:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:51.523 20:05:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.523 20:05:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.523 20:05:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.523 20:05:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:51.523 20:05:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:51.523 20:05:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.524 20:05:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.524 20:05:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.524 20:05:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:51.524 20:05:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.524 20:05:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.524 
[2024-05-15 20:05:43.846605] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:51.524 20:05:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.524 20:05:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:51.524 20:05:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.524 20:05:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.524 20:05:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.524 20:05:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:51.524 20:05:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.524 20:05:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.524 20:05:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.524 20:05:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:52.908 20:05:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:52.908 20:05:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:52.908 20:05:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:52.908 20:05:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:52.908 20:05:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:55.453 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:55.453 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:55.453 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:55.453 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:55.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.454 [2024-05-15 20:05:47.568624] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 
-- # xtrace_disable 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.454 [2024-05-15 20:05:47.632794] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.454 [2024-05-15 20:05:47.688973] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:55.454 
20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.454 [2024-05-15 20:05:47.745150] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.454 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.455 20:05:47 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.455 [2024-05-15 20:05:47.805364] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
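The loop traced above drives the same subsystem lifecycle once per iteration through rpc_cmd before the poll-group statistics are collected below. A minimal standalone sketch of that per-iteration RPC sequence follows; it assumes SPDK's scripts/rpc.py is on PATH, the Malloc1 bdev already exists, and 10.0.0.2:4420 is the listen address — all values copied from the trace, so treat this as an illustrative recap rather than part of the test script itself:

#!/usr/bin/env bash
# Sketch of one target/rpc.sh loop iteration (assumptions: rpc.py reachable on
# PATH, Malloc1 bdev already created, target listening on 10.0.0.2:4420).
nqn=nqn.2016-06.io.spdk:cnode1
rpc.py nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME            # new subsystem with the test serial
rpc.py nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420   # TCP listener (the *NOTICE* lines above)
rpc.py nvmf_subsystem_add_ns "$nqn" Malloc1                            # attach the Malloc1 bdev as a namespace
rpc.py nvmf_subsystem_allow_any_host "$nqn"                            # no host whitelisting during the test
rpc.py nvmf_subsystem_remove_ns "$nqn" 1                               # detach namespace 1 again
rpc.py nvmf_delete_subsystem "$nqn"                                    # tear down before the next iteration

After the final iteration the test dumps nvmf_get_stats and aggregates it with the jsum helper, i.e. jq '.poll_groups[].admin_qpairs' piped through awk '{s+=$1}END{print s}', asserting that both the admin and I/O qpair totals are positive; that is the stats output which follows.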
00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:55.455 "tick_rate": 2400000000, 00:13:55.455 "poll_groups": [ 00:13:55.455 { 00:13:55.455 "name": "nvmf_tgt_poll_group_000", 00:13:55.455 "admin_qpairs": 0, 00:13:55.455 "io_qpairs": 224, 00:13:55.455 "current_admin_qpairs": 0, 00:13:55.455 "current_io_qpairs": 0, 00:13:55.455 "pending_bdev_io": 0, 00:13:55.455 "completed_nvme_io": 253, 00:13:55.455 "transports": [ 00:13:55.455 { 00:13:55.455 "trtype": "TCP" 00:13:55.455 } 00:13:55.455 ] 00:13:55.455 }, 00:13:55.455 { 00:13:55.455 "name": "nvmf_tgt_poll_group_001", 00:13:55.455 "admin_qpairs": 1, 00:13:55.455 "io_qpairs": 223, 00:13:55.455 "current_admin_qpairs": 0, 00:13:55.455 "current_io_qpairs": 0, 00:13:55.455 "pending_bdev_io": 0, 00:13:55.455 "completed_nvme_io": 226, 00:13:55.455 "transports": [ 00:13:55.455 { 00:13:55.455 "trtype": "TCP" 00:13:55.455 } 00:13:55.455 ] 00:13:55.455 }, 00:13:55.455 { 00:13:55.455 "name": "nvmf_tgt_poll_group_002", 00:13:55.455 "admin_qpairs": 6, 00:13:55.455 "io_qpairs": 218, 00:13:55.455 "current_admin_qpairs": 0, 00:13:55.455 "current_io_qpairs": 0, 00:13:55.455 "pending_bdev_io": 0, 00:13:55.455 "completed_nvme_io": 317, 00:13:55.455 "transports": [ 00:13:55.455 { 00:13:55.455 "trtype": "TCP" 00:13:55.455 } 00:13:55.455 ] 00:13:55.455 }, 00:13:55.455 { 00:13:55.455 "name": "nvmf_tgt_poll_group_003", 00:13:55.455 "admin_qpairs": 0, 00:13:55.455 "io_qpairs": 224, 00:13:55.455 "current_admin_qpairs": 0, 00:13:55.455 "current_io_qpairs": 0, 00:13:55.455 "pending_bdev_io": 0, 00:13:55.455 "completed_nvme_io": 443, 00:13:55.455 "transports": [ 00:13:55.455 { 00:13:55.455 "trtype": "TCP" 00:13:55.455 } 00:13:55.455 ] 00:13:55.455 } 00:13:55.455 ] 00:13:55.455 }' 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:55.455 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:55.717 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:55.717 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:55.717 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:55.717 20:05:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:55.717 20:05:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:55.717 20:05:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:13:55.717 20:05:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:55.717 20:05:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:13:55.717 20:05:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:55.717 20:05:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:55.717 rmmod nvme_tcp 00:13:55.717 rmmod nvme_fabrics 00:13:55.717 rmmod nvme_keyring 00:13:55.717 
20:05:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:55.717 20:05:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:13:55.717 20:05:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:13:55.717 20:05:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 4109223 ']' 00:13:55.717 20:05:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 4109223 00:13:55.717 20:05:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 4109223 ']' 00:13:55.717 20:05:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 4109223 00:13:55.717 20:05:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:13:55.717 20:05:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:55.717 20:05:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4109223 00:13:55.717 20:05:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:55.717 20:05:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:55.717 20:05:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4109223' 00:13:55.717 killing process with pid 4109223 00:13:55.717 20:05:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 4109223 00:13:55.717 [2024-05-15 20:05:48.091257] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:55.717 20:05:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 4109223 00:13:55.978 20:05:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:55.978 20:05:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:55.978 20:05:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:55.978 20:05:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:55.978 20:05:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:55.978 20:05:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.978 20:05:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:55.978 20:05:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.893 20:05:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:57.893 00:13:57.893 real 0m38.581s 00:13:57.893 user 1m53.566s 00:13:57.893 sys 0m8.110s 00:13:57.893 20:05:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:57.893 20:05:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.893 ************************************ 00:13:57.893 END TEST nvmf_rpc 00:13:57.893 ************************************ 00:13:57.893 20:05:50 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:57.893 20:05:50 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:57.893 20:05:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:57.893 20:05:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:57.893 ************************************ 00:13:57.893 START TEST nvmf_invalid 00:13:57.893 ************************************ 00:13:57.893 20:05:50 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:58.155 * Looking for test storage... 00:13:58.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:58.155 20:05:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:06.303 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:06.303 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:06.303 Found net devices under 0000:31:00.0: cvl_0_0 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:06.303 Found net devices under 0000:31:00.1: cvl_0_1 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:06.303 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:06.304 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:06.304 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:06.304 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:06.304 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:06.304 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:06.304 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:06.304 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:06.304 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:06.304 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:06.304 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:06.304 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:06.304 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:06.304 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:06.304 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:06.304 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:06.304 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:06.564 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:06.564 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:06.564 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:06.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:06.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.518 ms 00:14:06.564 00:14:06.564 --- 10.0.0.2 ping statistics --- 00:14:06.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.564 rtt min/avg/max/mdev = 0.518/0.518/0.518/0.000 ms 00:14:06.564 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:06.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:06.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.367 ms 00:14:06.564 00:14:06.564 --- 10.0.0.1 ping statistics --- 00:14:06.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.564 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:14:06.564 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:06.564 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:14:06.564 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:06.564 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:06.564 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:06.564 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:06.564 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:06.564 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:06.564 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:06.564 20:05:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:06.564 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:06.564 20:05:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:06.564 20:05:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:06.564 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=4119560 00:14:06.564 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 4119560 00:14:06.564 20:05:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 4119560 ']' 00:14:06.564 20:05:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:06.564 20:05:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.564 20:05:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:06.564 20:05:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.564 20:05:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:06.564 20:05:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:06.564 [2024-05-15 20:05:59.036591] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:14:06.564 [2024-05-15 20:05:59.036656] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.901 EAL: No free 2048 kB hugepages reported on node 1 00:14:06.901 [2024-05-15 20:05:59.127072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:06.901 [2024-05-15 20:05:59.203948] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:06.901 [2024-05-15 20:05:59.204006] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:06.901 [2024-05-15 20:05:59.204014] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:06.901 [2024-05-15 20:05:59.204020] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:06.901 [2024-05-15 20:05:59.204026] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:06.901 [2024-05-15 20:05:59.204161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.901 [2024-05-15 20:05:59.204302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.901 [2024-05-15 20:05:59.204469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:06.901 [2024-05-15 20:05:59.204470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.471 20:05:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:07.471 20:05:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:14:07.471 20:05:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:07.471 20:05:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:07.471 20:05:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:07.471 20:05:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:07.471 20:05:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:07.471 20:05:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode31805 00:14:07.731 [2024-05-15 20:06:00.046386] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:07.731 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:07.731 { 00:14:07.731 "nqn": "nqn.2016-06.io.spdk:cnode31805", 00:14:07.731 "tgt_name": "foobar", 00:14:07.731 "method": "nvmf_create_subsystem", 00:14:07.731 "req_id": 1 00:14:07.731 } 00:14:07.732 Got JSON-RPC error response 00:14:07.732 response: 00:14:07.732 { 00:14:07.732 "code": -32603, 00:14:07.732 "message": "Unable to find target foobar" 00:14:07.732 }' 00:14:07.732 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:07.732 { 00:14:07.732 "nqn": "nqn.2016-06.io.spdk:cnode31805", 00:14:07.732 "tgt_name": "foobar", 00:14:07.732 "method": "nvmf_create_subsystem", 00:14:07.732 "req_id": 1 00:14:07.732 } 00:14:07.732 Got JSON-RPC error response 00:14:07.732 response: 00:14:07.732 { 00:14:07.732 "code": -32603, 00:14:07.732 "message": "Unable to find target foobar" 00:14:07.732 } == *\U\n\a\b\l\e\ 
\t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:07.732 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:07.732 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode338 00:14:07.993 [2024-05-15 20:06:00.275190] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode338: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:07.993 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:07.993 { 00:14:07.993 "nqn": "nqn.2016-06.io.spdk:cnode338", 00:14:07.993 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:07.993 "method": "nvmf_create_subsystem", 00:14:07.993 "req_id": 1 00:14:07.993 } 00:14:07.993 Got JSON-RPC error response 00:14:07.993 response: 00:14:07.993 { 00:14:07.993 "code": -32602, 00:14:07.993 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:07.993 }' 00:14:07.993 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:07.993 { 00:14:07.993 "nqn": "nqn.2016-06.io.spdk:cnode338", 00:14:07.993 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:07.993 "method": "nvmf_create_subsystem", 00:14:07.993 "req_id": 1 00:14:07.993 } 00:14:07.993 Got JSON-RPC error response 00:14:07.993 response: 00:14:07.993 { 00:14:07.993 "code": -32602, 00:14:07.993 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:07.993 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:07.993 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:07.993 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode17461 00:14:08.254 [2024-05-15 20:06:00.499851] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17461: invalid model number 'SPDK_Controller' 00:14:08.254 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:08.254 { 00:14:08.254 "nqn": "nqn.2016-06.io.spdk:cnode17461", 00:14:08.254 "model_number": "SPDK_Controller\u001f", 00:14:08.254 "method": "nvmf_create_subsystem", 00:14:08.254 "req_id": 1 00:14:08.254 } 00:14:08.255 Got JSON-RPC error response 00:14:08.255 response: 00:14:08.255 { 00:14:08.255 "code": -32602, 00:14:08.255 "message": "Invalid MN SPDK_Controller\u001f" 00:14:08.255 }' 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:08.255 { 00:14:08.255 "nqn": "nqn.2016-06.io.spdk:cnode17461", 00:14:08.255 "model_number": "SPDK_Controller\u001f", 00:14:08.255 "method": "nvmf_create_subsystem", 00:14:08.255 "req_id": 1 00:14:08.255 } 00:14:08.255 Got JSON-RPC error response 00:14:08.255 response: 00:14:08.255 { 00:14:08.255 "code": -32602, 00:14:08.255 "message": "Invalid MN SPDK_Controller\u001f" 00:14:08.255 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' 
'92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid 
-- target/invalid.sh@25 -- # printf %x 102 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ + == \- ]] 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '+f^1R=],H>~[/&1fw[PKW' 00:14:08.255 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '+f^1R=],H>~[/&1fw[PKW' nqn.2016-06.io.spdk:cnode32096 00:14:08.517 [2024-05-15 20:06:00.885135] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32096: invalid serial number '+f^1R=],H>~[/&1fw[PKW' 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:08.517 { 00:14:08.517 "nqn": "nqn.2016-06.io.spdk:cnode32096", 00:14:08.517 "serial_number": "+f^1R=],H>~[/&1fw[PKW", 00:14:08.517 "method": "nvmf_create_subsystem", 00:14:08.517 "req_id": 1 00:14:08.517 } 00:14:08.517 Got JSON-RPC error response 00:14:08.517 response: 00:14:08.517 { 00:14:08.517 "code": -32602, 
00:14:08.517 "message": "Invalid SN +f^1R=],H>~[/&1fw[PKW" 00:14:08.517 }' 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:08.517 { 00:14:08.517 "nqn": "nqn.2016-06.io.spdk:cnode32096", 00:14:08.517 "serial_number": "+f^1R=],H>~[/&1fw[PKW", 00:14:08.517 "method": "nvmf_create_subsystem", 00:14:08.517 "req_id": 1 00:14:08.517 } 00:14:08.517 Got JSON-RPC error response 00:14:08.517 response: 00:14:08.517 { 00:14:08.517 "code": -32602, 00:14:08.517 "message": "Invalid SN +f^1R=],H>~[/&1fw[PKW" 00:14:08.517 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:14:08.517 20:06:00 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.517 20:06:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.517 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:14:08.517 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:14:08.517 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:14:08.517 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.517 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.517 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:14:08.517 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:14:08.517 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:14:08.517 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.517 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:08.779 20:06:01 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
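Annotation: the trace above is the character-by-character loop that builds a random serial/model string from the chars array (decimal codes 32-127): for each position it prints the code with printf %x, expands it with echo -e, and appends the result. Below is a minimal stand-in for that generator, written from what the trace shows rather than copied from target/invalid.sh, so names and details are simplifications.

```bash
#!/usr/bin/env bash
# Minimal stand-in for the string generator being traced above: pick `length`
# characters from the same code range the chars array covers (decimal 32-127),
# turn each code into its character via a hex escape, and append it.
gen_random_string() {
    local length=$1 ll code ch string=""
    for ((ll = 0; ll < length; ll++)); do
        code=$((32 + RANDOM % 96))                       # 0x20 (space) .. 0x7f (DEL)
        printf -v ch '%b' "\\x$(printf '%x' "$code")"    # expand \xNN into one character
        string+=$ch
    done
    printf '%s\n' "$string"
}

gen_random_string 21    # e.g. something like '+f^1R=],H>~[/&1fw[PKW'
```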
00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
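Annotation: the 41-character string being assembled here feeds the next nvmf_create_subsystem call, which the target must reject. The earlier part of the log shows the same pattern for serial numbers ("Invalid SN") and model numbers ("Invalid MN"). The sketch below shows that rejection check in isolation; the rpc.py path and cnode number are placeholders, not the ones from this run.

```bash
# Hedged sketch of the rejection check the trace performs: pass a value with a
# control character, expect the JSON-RPC call to fail, and match the error text.
rpc=/path/to/spdk/scripts/rpc.py
bad_serial=$'SPDKISFASTANDAWESOME\037'    # trailing 0x1f makes the SN invalid

if out=$("$rpc" nvmf_create_subsystem -s "$bad_serial" nqn.2016-06.io.spdk:cnode1 2>&1); then
    echo "target accepted an invalid serial number" >&2
    exit 1
fi
[[ $out == *"Invalid SN"* ]] || { echo "unexpected error: $out" >&2; exit 1; }

# The model-number variant is identical except it passes -d and expects "Invalid MN".
```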
00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:14:08.779 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length 
)) 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ / == \- ]] 00:14:08.780 20:06:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '/'\''B{U /dev/null' 00:14:11.135 20:06:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.684 20:06:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:13.684 00:14:13.684 real 0m15.275s 00:14:13.684 user 0m22.505s 00:14:13.684 sys 0m7.396s 00:14:13.684 20:06:05 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:14:13.684 20:06:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:13.684 ************************************ 00:14:13.684 END TEST nvmf_invalid 00:14:13.684 ************************************ 00:14:13.684 20:06:05 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:13.684 20:06:05 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:13.684 20:06:05 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:13.684 20:06:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:13.684 ************************************ 00:14:13.684 START TEST nvmf_abort 00:14:13.684 ************************************ 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:13.684 * Looking for test storage... 00:14:13.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:13.684 20:06:05 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:14:13.684 20:06:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:21.832 
20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:21.832 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:21.832 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:21.832 Found net devices under 0000:31:00.0: cvl_0_0 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:21.832 Found net devices under 0000:31:00.1: cvl_0_1 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:21.832 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:21.833 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:21.833 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:21.833 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:21.833 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:21.833 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:21.833 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:21.833 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:21.833 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:21.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
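Annotation: the nvmf_tcp_init steps above split the two detected ice ports between the host (initiator side) and a dedicated network namespace (target side), then open TCP port 4420 and verify reachability with ping in both directions. The same plumbing, collected into one standalone sketch; interface names and addresses are taken from this log and will differ on other machines.

```bash
# Standalone sketch of the namespace plumbing performed by nvmf_tcp_init above:
# one port stays on the host as the initiator, the other becomes the target
# inside its own namespace, and a firewall rule opens the NVMe/TCP port.
TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                         # host -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1     # target namespace -> host
```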
00:14:21.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.493 ms 00:14:21.833 00:14:21.833 --- 10.0.0.2 ping statistics --- 00:14:21.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.833 rtt min/avg/max/mdev = 0.493/0.493/0.493/0.000 ms 00:14:21.833 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:21.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:21.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.362 ms 00:14:21.833 00:14:21.833 --- 10.0.0.1 ping statistics --- 00:14:21.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.833 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:14:21.833 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:21.833 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:14:21.833 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:21.833 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:21.833 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:21.833 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:21.833 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:21.833 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:21.833 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:21.833 20:06:13 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:14:21.833 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:21.833 20:06:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:21.833 20:06:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:21.833 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=4125681 00:14:21.833 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 4125681 00:14:21.833 20:06:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 4125681 ']' 00:14:21.833 20:06:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.833 20:06:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:21.833 20:06:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.833 20:06:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:21.833 20:06:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:21.833 20:06:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:21.833 [2024-05-15 20:06:13.588372] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
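Annotation: with networking up, nvmfappstart launches nvmf_tgt inside the target namespace with core mask 0xE and waits for its RPC socket via waitforlisten. Below is a hedged approximation of that start-and-wait step; the relative paths and the polling loop are simplifications, not the actual waitforlisten implementation.

```bash
# Hedged approximation of the nvmfappstart step above: run nvmf_tgt in the
# target namespace and block until its JSON-RPC socket responds.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

until ./scripts/rpc.py -t 1 rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is listening on its RPC socket"
```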
00:14:21.833 [2024-05-15 20:06:13.588436] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.833 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.833 [2024-05-15 20:06:13.667318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:21.833 [2024-05-15 20:06:13.743023] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.833 [2024-05-15 20:06:13.743062] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:21.833 [2024-05-15 20:06:13.743073] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:21.833 [2024-05-15 20:06:13.743080] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:21.833 [2024-05-15 20:06:13.743086] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:21.833 [2024-05-15 20:06:13.743194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:21.833 [2024-05-15 20:06:13.743366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:21.833 [2024-05-15 20:06:13.743581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:22.093 20:06:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:22.093 20:06:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:14:22.093 20:06:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:22.093 20:06:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:22.093 20:06:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:22.093 20:06:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:22.093 20:06:14 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:14:22.093 20:06:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.093 20:06:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:22.093 [2024-05-15 20:06:14.515372] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:22.093 20:06:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.093 20:06:14 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:14:22.093 20:06:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.093 20:06:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:22.093 Malloc0 00:14:22.093 20:06:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.093 20:06:14 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:22.093 20:06:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.093 20:06:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:22.093 Delay0 00:14:22.093 20:06:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.093 20:06:14 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:22.093 20:06:14 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.093 20:06:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:22.093 20:06:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.093 20:06:14 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:14:22.093 20:06:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.093 20:06:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:22.093 20:06:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.093 20:06:14 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:22.093 20:06:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.093 20:06:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:22.352 [2024-05-15 20:06:14.595627] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:22.352 [2024-05-15 20:06:14.595849] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:22.352 20:06:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.352 20:06:14 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:22.352 20:06:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.352 20:06:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:22.352 20:06:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.352 20:06:14 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:14:22.352 EAL: No free 2048 kB hugepages reported on node 1 00:14:22.352 [2024-05-15 20:06:14.725525] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:24.893 Initializing NVMe Controllers 00:14:24.893 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:24.893 controller IO queue size 128 less than required 00:14:24.893 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:14:24.893 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:14:24.893 Initialization complete. Launching workers. 
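Annotation: the rpc_cmd calls above assemble the abort test target (TCP transport, a 64 MiB malloc bdev wrapped in a delay bdev, one subsystem with one namespace and one listener) and then drive it with the bundled abort example at queue depth 128 so that aborts actually get submitted. Collected into one sketch below; rpc.py and the example binary are referenced with relative paths instead of the workspace paths in the log.

```bash
# Sketch of the RPC sequence the abort test drives, mirroring the trace above.
rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
$rpc bdev_malloc_create 64 4096 -b Malloc0                       # 64 MiB, 4 KiB blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000                 # 1 s artificial latency
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Queue depth 128 against a deliberately slow namespace forces queued I/O,
# which is what the abort example then cancels.
./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128
```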
00:14:24.893 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 33464 00:14:24.893 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33529, failed to submit 62 00:14:24.893 success 33468, unsuccess 61, failed 0 00:14:24.893 20:06:16 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:24.893 20:06:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.893 20:06:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:24.893 20:06:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.893 20:06:16 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:14:24.893 20:06:16 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:14:24.893 20:06:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:24.893 20:06:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:14:24.893 20:06:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:24.893 20:06:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:14:24.893 20:06:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:24.893 20:06:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:24.893 rmmod nvme_tcp 00:14:24.893 rmmod nvme_fabrics 00:14:24.893 rmmod nvme_keyring 00:14:24.893 20:06:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:24.893 20:06:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:14:24.893 20:06:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:14:24.893 20:06:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 4125681 ']' 00:14:24.893 20:06:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 4125681 00:14:24.893 20:06:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 4125681 ']' 00:14:24.893 20:06:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 4125681 00:14:24.893 20:06:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:14:24.893 20:06:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:24.893 20:06:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4125681 00:14:24.893 20:06:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:24.893 20:06:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:24.893 20:06:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4125681' 00:14:24.893 killing process with pid 4125681 00:14:24.893 20:06:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 4125681 00:14:24.893 [2024-05-15 20:06:16.899293] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:24.893 20:06:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 4125681 00:14:24.893 20:06:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:24.893 20:06:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:24.893 20:06:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:24.893 20:06:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:24.893 
20:06:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:24.893 20:06:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.893 20:06:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:24.893 20:06:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.808 20:06:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:26.808 00:14:26.808 real 0m13.362s 00:14:26.808 user 0m13.728s 00:14:26.808 sys 0m6.607s 00:14:26.808 20:06:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:26.808 20:06:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:26.808 ************************************ 00:14:26.808 END TEST nvmf_abort 00:14:26.808 ************************************ 00:14:26.808 20:06:19 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:26.808 20:06:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:26.808 20:06:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:26.808 20:06:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:26.808 ************************************ 00:14:26.808 START TEST nvmf_ns_hotplug_stress 00:14:26.808 ************************************ 00:14:26.808 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:26.808 * Looking for test storage... 00:14:26.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:26.808 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:26.808 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:14:27.069 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:27.069 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:27.069 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:27.069 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:27.069 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:27.069 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:27.069 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:27.069 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:27.069 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:27.069 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:27.069 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:27.069 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:27.069 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:27.069 
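The last lines above come from sourcing test/nvmf/common.sh, which generates the initiator's host identity once and reuses it for every later connect. A hedged sketch of that pattern; only the nvme gen-hostnqn call, the resulting values and the NVME_HOST array appear in the trace, so the exact parameter expansion used to derive the host ID is an assumption:

NVME_HOSTNQN=$(nvme gen-hostnqn)      # @17: e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
NVME_HOSTID=${NVME_HOSTNQN##*:}       # assumed expansion; @18 shows the value 00539ede-7deb-ec11-9bc7-a4bf01928396
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")   # @19: passed to every nvme connect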
20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:27.069 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:27.069 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:27.069 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:27.069 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:27.069 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:27.069 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:27.069 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.069 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.069 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.069 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:14:27.069 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.069 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:14:27.069 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:27.069 
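The build_nvmf_app_args lines that follow assemble the target command line one array element at a time; the finished invocation only becomes visible much later at nvmf/common.sh@480. As a reading aid, a sketch of how those pieces combine, using the @29, @243, @270 and @480 trace lines; the initial binary path and the backgrounding are assumptions inferred from the @480/@481 lines rather than traced directly:

NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)  # assumed starting value, matches @480
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)                   # @29: shared-memory id plus full tracepoint mask
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")   # @243: run the target inside cvl_0_0_ns_spdk
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")        # @270: prefix the netns wrapper
"${NVMF_APP[@]}" -m 0xE &                                     # yields the @480 command: ... nvmf_tgt -i 0 -e 0xFFFF -m 0xE
nvmfpid=$!                                                    # @481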
20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:27.069 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:27.069 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:27.069 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:27.069 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:27.069 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:27.070 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:27.070 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:27.070 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:14:27.070 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:27.070 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:27.070 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:27.070 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:27.070 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:27.070 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.070 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:27.070 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.070 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:27.070 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:27.070 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:27.070 20:06:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:35.216 20:06:27 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:35.216 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:35.216 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:35.216 
20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:35.216 Found net devices under 0000:31:00.0: cvl_0_0 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:35.216 Found net devices under 0000:31:00.1: cvl_0_1 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:35.216 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:35.216 
20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:35.217 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:35.217 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:35.217 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:35.217 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:35.217 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:35.217 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:35.217 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:35.217 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:35.217 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:35.217 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:35.217 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:35.217 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:35.217 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:35.217 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:35.217 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:35.217 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:35.217 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:14:35.217 00:14:35.217 --- 10.0.0.2 ping statistics --- 00:14:35.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.217 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:14:35.217 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:35.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:35.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:14:35.217 00:14:35.217 --- 10.0.0.1 ping statistics --- 00:14:35.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.217 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:14:35.217 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:35.217 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:14:35.217 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:35.217 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:35.217 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:35.217 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:35.217 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:35.217 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:35.217 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:35.478 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:14:35.478 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:35.478 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:35.478 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.478 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=4131051 00:14:35.478 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 4131051 00:14:35.478 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:35.478 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 4131051 ']' 00:14:35.478 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.478 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:35.478 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.478 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:35.478 20:06:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.478 [2024-05-15 20:06:27.793924] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:14:35.478 [2024-05-15 20:06:27.793973] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.478 EAL: No free 2048 kB hugepages reported on node 1 00:14:35.478 [2024-05-15 20:06:27.868092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:35.478 [2024-05-15 20:06:27.932427] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:35.478 [2024-05-15 20:06:27.932464] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:35.478 [2024-05-15 20:06:27.932472] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:35.478 [2024-05-15 20:06:27.932479] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:35.478 [2024-05-15 20:06:27.932484] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:35.478 [2024-05-15 20:06:27.932593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:35.478 [2024-05-15 20:06:27.932750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.478 [2024-05-15 20:06:27.932750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:36.422 20:06:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:36.422 20:06:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:14:36.422 20:06:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:36.422 20:06:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:36.422 20:06:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:36.422 20:06:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:36.422 20:06:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:14:36.422 20:06:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:36.422 [2024-05-15 20:06:28.848687] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:36.422 20:06:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:36.684 20:06:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:36.971 [2024-05-15 20:06:29.290250] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:36.971 [2024-05-15 20:06:29.290486] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:36.971 20:06:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:37.305 20:06:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:14:37.305 Malloc0 00:14:37.305 20:06:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:37.586 Delay0 00:14:37.586 20:06:29 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:37.847 20:06:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:14:38.107 NULL1 00:14:38.107 20:06:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:38.107 20:06:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=4131750 00:14:38.107 20:06:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4131750 00:14:38.107 20:06:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:14:38.107 20:06:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:38.367 EAL: No free 2048 kB hugepages reported on node 1 00:14:39.310 Read completed with error (sct=0, sc=11) 00:14:39.310 20:06:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:39.310 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:39.310 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:39.571 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:39.571 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:39.571 20:06:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:14:39.571 20:06:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:14:39.831 true 00:14:39.831 20:06:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4131750 00:14:39.831 20:06:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:40.772 20:06:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:40.772 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:40.772 20:06:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:14:40.772 20:06:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:14:41.033 true 00:14:41.033 20:06:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4131750 00:14:41.033 20:06:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
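The cycles above and below all repeat the same hot-plug pattern from ns_hotplug_stress.sh@44-@50 while the background spdk_nvme_perf job keeps issuing reads. A sketch reconstructed from the script line numbers in the trace; the loop structure and anything not shown in the trace are assumptions:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py          # @11
null_size=1000                                                                   # @25
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &                                    # @40: 30 s randread load in the background
PERF_PID=$!                                                                      # @42 (4131750 in this run)
while kill -0 "$PERF_PID"; do                                                    # @44: loop while perf is still alive
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1                # @45: hot-remove NSID 1 (Delay0)
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0              # @46: hot-add it again
    null_size=$((null_size + 1))                                                 # @49
    $rpc_py bdev_null_resize NULL1 "$null_size"                                  # @50: resize the bdev behind NSID 2
done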
00:14:41.294 20:06:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:41.555 20:06:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:14:41.555 20:06:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:14:41.555 true 00:14:41.555 20:06:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4131750 00:14:41.555 20:06:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:42.521 20:06:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:42.521 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:42.781 20:06:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:14:42.781 20:06:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:14:43.042 true 00:14:43.042 20:06:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4131750 00:14:43.042 20:06:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:43.303 20:06:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:43.563 20:06:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:14:43.563 20:06:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:14:43.563 true 00:14:43.563 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4131750 00:14:43.563 20:06:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:44.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:44.947 20:06:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:44.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:44.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:44.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:44.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:44.947 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:44.947 20:06:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:14:44.947 20:06:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:14:45.208 true 00:14:45.208 20:06:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4131750 00:14:45.208 20:06:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:46.150 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:46.150 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:46.150 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:14:46.150 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:14:46.411 true 00:14:46.411 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4131750 00:14:46.411 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:46.671 20:06:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:46.932 20:06:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:14:46.932 20:06:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:14:46.932 true 00:14:46.932 20:06:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4131750 00:14:46.932 20:06:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.318 20:06:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:48.318 20:06:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:14:48.318 20:06:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:14:48.318 true 00:14:48.578 20:06:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4131750 00:14:48.578 20:06:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.578 20:06:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:48.838 20:06:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:14:48.838 20:06:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1010 00:14:49.099 true 00:14:49.099 20:06:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4131750 00:14:49.099 20:06:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:50.043 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:50.043 20:06:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:50.304 20:06:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:14:50.304 20:06:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:14:50.564 true 00:14:50.564 20:06:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4131750 00:14:50.564 20:06:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:50.825 20:06:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:50.825 20:06:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:14:50.825 20:06:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:14:51.085 true 00:14:51.085 20:06:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4131750 00:14:51.085 20:06:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:52.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:52.028 20:06:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:52.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:52.288 20:06:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:14:52.288 20:06:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:14:52.549 true 00:14:52.549 20:06:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4131750 00:14:52.549 20:06:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:52.809 20:06:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:53.070 20:06:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:14:53.070 20:06:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:14:53.070 true 00:14:53.070 20:06:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4131750 00:14:53.070 20:06:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:54.455 20:06:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:54.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:54.456 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:54.456 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:54.456 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:54.456 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:54.456 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:54.456 20:06:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:14:54.456 20:06:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:14:54.716 true 00:14:54.716 20:06:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4131750 00:14:54.716 20:06:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:55.662 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:55.662 20:06:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:55.662 20:06:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:14:55.662 20:06:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:14:55.922 true 00:14:55.922 20:06:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4131750 00:14:55.922 20:06:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:56.183 20:06:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:56.444 20:06:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:14:56.444 20:06:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:14:56.444 true 00:14:56.705 20:06:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4131750 00:14:56.705 20:06:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:14:57.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:57.645 20:06:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:57.903 20:06:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:14:57.903 20:06:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:14:58.162 true 00:14:58.162 20:06:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4131750 00:14:58.162 20:06:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:58.422 20:06:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:58.681 20:06:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:14:58.681 20:06:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:14:58.681 true 00:14:58.681 20:06:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4131750 00:14:58.681 20:06:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:59.623 20:06:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:59.883 20:06:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:14:59.883 20:06:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:15:00.144 true 00:15:00.144 20:06:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4131750 00:15:00.144 20:06:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:00.405 20:06:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:00.665 20:06:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:15:00.665 20:06:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:15:00.665 true 00:15:00.924 20:06:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4131750 00:15:00.924 20:06:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:01.867 20:06:54 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:01.867 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:01.867 20:06:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:15:01.867 20:06:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:15:02.127 true 00:15:02.127 20:06:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4131750 00:15:02.127 20:06:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:02.387 20:06:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:02.648 20:06:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:15:02.648 20:06:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:15:02.648 true 00:15:02.648 20:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4131750 00:15:02.648 20:06:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:04.031 20:06:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:04.031 20:06:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:15:04.031 20:06:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:15:04.292 true 00:15:04.292 20:06:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4131750 00:15:04.292 20:06:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:04.569 20:06:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:04.569 20:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:15:04.569 20:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:15:04.895 true 00:15:04.895 20:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4131750 00:15:04.895 20:06:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:05.836 20:06:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:06.096 20:06:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:15:06.096 20:06:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:15:06.096 true 00:15:06.356 20:06:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4131750 00:15:06.356 20:06:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:06.356 20:06:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:06.617 20:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:15:06.617 20:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:15:06.878 true 00:15:06.878 20:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4131750 00:15:06.878 20:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:07.819 20:07:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:08.094 20:07:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:15:08.094 20:07:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:15:08.354 true 00:15:08.354 20:07:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4131750 00:15:08.354 20:07:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:08.354 Initializing NVMe Controllers 00:15:08.354 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:08.354 Controller IO queue size 128, less than required. 00:15:08.354 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:08.354 Controller IO queue size 128, less than required. 00:15:08.354 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:08.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:08.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:08.354 Initialization complete. Launching workers. 
00:15:08.354 ======================================================== 00:15:08.354 Latency(us) 00:15:08.354 Device Information : IOPS MiB/s Average min max 00:15:08.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 915.35 0.45 83487.19 2471.73 1154089.10 00:15:08.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17971.07 8.77 7124.27 2112.11 504878.77 00:15:08.354 ======================================================== 00:15:08.354 Total : 18886.42 9.22 10825.26 2112.11 1154089.10 00:15:08.354 00:15:08.613 20:07:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:08.872 20:07:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:15:08.872 20:07:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:15:08.872 true 00:15:08.872 20:07:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4131750 00:15:08.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (4131750) - No such process 00:15:08.872 20:07:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 4131750 00:15:08.872 20:07:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:09.131 20:07:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:09.390 20:07:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:15:09.390 20:07:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:15:09.390 20:07:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:15:09.390 20:07:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:09.390 20:07:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:15:09.390 null0 00:15:09.390 20:07:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:09.390 20:07:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:09.390 20:07:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:15:09.648 null1 00:15:09.648 20:07:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:09.648 20:07:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:09.648 20:07:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:15:09.908 null2 00:15:09.908 20:07:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:09.908 20:07:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < 
nthreads )) 00:15:09.908 20:07:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:15:10.168 null3 00:15:10.168 20:07:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:10.168 20:07:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:10.168 20:07:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:15:10.429 null4 00:15:10.429 20:07:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:10.429 20:07:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:10.429 20:07:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:15:10.689 null5 00:15:10.689 20:07:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:10.689 20:07:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:10.689 20:07:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:15:10.689 null6 00:15:10.689 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:10.689 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:10.689 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:15:10.950 null7 00:15:10.950 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:10.950 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
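
Editor's note: the xtrace above (ns_hotplug_stress.sh markers @58-@60) is the setup phase for the parallel hotplug stress run: eight null bdevs, null0 through null7, are created before the workers start. A minimal bash sketch of that phase, reconstructed from the trace markers rather than copied from the actual script; the $rpc variable and loop variable names are illustrative, while the RPC name, the 100/4096 arguments, and nthreads=8 come straight from the trace.

    nthreads=8
    pids=()
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # one null bdev per worker: name, total size (MB), block size, as seen in the trace
    for ((i = 0; i < nthreads; i++)); do
        "$rpc" bdev_null_create "null$i" 100 4096
    done
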
00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
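
Editor's note: the interleaved nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns calls above come from the add_remove helper (trace markers @14-@18). A sketch of what each worker does, reconstructed from the trace; the helper name, subsystem NQN, and RPC argument order are taken from the xtrace, everything else (the $rpc variable, formatting) is illustrative.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # markers 14-18 in the trace: ten hot-add / hot-remove cycles per namespace
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }
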
00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
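
Editor's note: each add_remove worker above is launched in the background and its PID recorded (markers @62-@64), which is why eight add/remove streams interleave in the log; the explicit wait on the recorded PIDs (marker @66, visible just below) then blocks until all workers finish. A sketch of that launch/wait pattern, again reconstructed from the trace and not copied from the script; it assumes the nthreads, pids, and add_remove definitions from the sketches above.

    # markers 62-66 in the trace: one background add_remove worker per null bdev
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &   # namespace IDs 1..8 map to null0..null7
        pids+=($!)
    done
    wait "${pids[@]}"
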
00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 4138191 4138192 4138194 4138196 4138198 4138200 4138202 4138203 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:10.951 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:11.212 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.213 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:11.213 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:11.213 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:11.213 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:11.213 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:11.213 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:11.213 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:11.473 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:11.473 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:11.473 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:11.473 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:11.473 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:11.473 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:11.473 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:11.473 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:11.473 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:11.473 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:11.473 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:11.473 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:11.473 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:11.473 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:11.473 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:11.473 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:11.473 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:11.473 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:11.473 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:11.473 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:11.474 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:11.474 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:11.474 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:11.474 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:11.735 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:11.735 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:11.735 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:11.735 20:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.735 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:11.735 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:11.735 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:11.735 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:11.735 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:11.735 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:11.735 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:11.735 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:11.735 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:11.735 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:11.735 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:11.735 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:11.735 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:11.735 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:11.735 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:11.735 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:11.735 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:11.735 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:11.735 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:11.735 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:11.735 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:15:11.735 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:11.735 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:11.735 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:11.735 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:11.735 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:11.735 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:11.735 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:11.997 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:11.997 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.997 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:11.997 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:11.997 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:11.997 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:11.997 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:11.997 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:12.258 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:12.258 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:12.258 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:12.258 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:12.258 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:12.258 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:12.258 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:12.258 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:12.258 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:12.258 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:12.258 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:12.258 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:12.258 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:12.258 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:12.258 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:12.258 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:12.258 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:12.258 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:12.258 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:12.258 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:12.258 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:12.258 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:12.258 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:12.258 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:12.519 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:12.519 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:12.519 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:12.519 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:12.519 
20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:12.519 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:12.519 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:12.519 20:07:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:12.780 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:12.780 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:12.780 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:12.780 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:12.780 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:12.780 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:12.780 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:12.780 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:12.780 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:12.780 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:12.780 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:12.780 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:12.780 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:12.780 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:12.780 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:12.780 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:12.780 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:12.780 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:12.780 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:12.780 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:12.781 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:12.781 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:12.781 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:12.781 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:12.781 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:12.781 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:13.042 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:13.042 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:13.042 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:13.042 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:13.042 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:13.042 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:13.042 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.042 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.042 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:13.042 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.042 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.042 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:13.042 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.042 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.042 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:13.042 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.042 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.043 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:13.043 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.043 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.043 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:13.043 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.043 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.043 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:13.043 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.043 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.043 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:13.043 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.043 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.043 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:13.304 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:13.304 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:13.304 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:13.304 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:13.304 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:13.304 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:13.304 20:07:05 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:13.304 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:13.566 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.566 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.566 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:13.566 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.566 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.566 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:13.566 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.566 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.566 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:13.566 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.566 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.566 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:13.566 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.566 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.566 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:13.566 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.566 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.566 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:13.566 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.566 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.566 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:13.566 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.566 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.566 20:07:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:13.827 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:13.827 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:13.827 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:13.827 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:13.827 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:13.827 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:13.827 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:13.827 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:14.088 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.088 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.088 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:14.088 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.088 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.088 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:14.088 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.088 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.088 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:14.088 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.088 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.088 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:14.088 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.088 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.088 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:14.088 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.088 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.088 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:14.088 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.088 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.088 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:14.088 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.088 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.088 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:14.088 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:14.088 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:14.088 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:14.088 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:14.088 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:14.349 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:14.349 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:14.349 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:14.349 20:07:06 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.349 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.349 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:14.349 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.349 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.349 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:14.349 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.349 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.349 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:14.349 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.349 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.349 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:14.349 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.349 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.350 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:14.350 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.350 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.350 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:14.350 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.350 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.350 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:14.350 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.350 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.350 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:14.609 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:14.609 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:14.609 20:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:14.609 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:14.609 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:14.609 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:14.609 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:14.610 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:14.870 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.870 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.870 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:14.870 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.870 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.870 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:14.870 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.870 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.870 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:14.870 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.870 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.870 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.870 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:14.870 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.870 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:14.870 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.870 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.870 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:14.870 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.870 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.870 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:14.870 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.870 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.870 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:15.131 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:15.131 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:15.131 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:15.131 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:15.131 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:15.131 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:15.131 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:15.131 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.392 20:07:07 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:15.392 rmmod nvme_tcp 00:15:15.392 rmmod nvme_fabrics 00:15:15.392 rmmod nvme_keyring 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 4131051 ']' 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 4131051 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 4131051 ']' 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 4131051 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4131051 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4131051' 00:15:15.392 killing 
process with pid 4131051 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 4131051 00:15:15.392 [2024-05-15 20:07:07.817790] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:15.392 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 4131051 00:15:15.653 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:15.653 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:15.653 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:15.653 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:15.653 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:15.653 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.653 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:15.653 20:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.567 20:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:17.567 00:15:17.567 real 0m50.832s 00:15:17.567 user 3m18.967s 00:15:17.567 sys 0m16.333s 00:15:17.567 20:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:17.567 20:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:15:17.567 ************************************ 00:15:17.567 END TEST nvmf_ns_hotplug_stress 00:15:17.567 ************************************ 00:15:17.828 20:07:10 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:17.828 20:07:10 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:17.828 20:07:10 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:17.828 20:07:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:17.828 ************************************ 00:15:17.828 START TEST nvmf_connect_stress 00:15:17.828 ************************************ 00:15:17.828 20:07:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:17.828 * Looking for test storage... 
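Before the connect_stress output starts, a note on the ns_hotplug_stress trace that just finished above: the interleaving of ns_hotplug_stress.sh lines 16-18 is consistent with eight background workers, one per namespace, each repeatedly attaching its null bdev to nqn.2016-06.io.spdk:cnode1 and detaching it again. The following is a minimal bash sketch reconstructed from the xtrace, not the literal script source; the helper name hotplug_worker is made up and the rpc.py path is shortened.

    # Sketch of the traced hotplug loop (reconstruction; real ns_hotplug_stress.sh may differ).
    hotplug_worker() {                       # hypothetical helper name
        local n=$1                           # namespace id 1-8, backed by bdev null$((n-1))
        for ((i = 0; i < 10; ++i)); do       # traced as ns_hotplug_stress.sh@16
            rpc.py nvmf_subsystem_add_ns -n "$n" nqn.2016-06.io.spdk:cnode1 "null$((n - 1))"   # @17
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$n"                    # @18
        done
    }
    for n in $(seq 1 8); do hotplug_worker "$n" & done
    wait

Each worker hammers the target's namespace hot-add/hot-remove path ten times, which is what the bursts of add_ns and remove_ns RPCs in the trace above correspond to.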
00:15:17.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:17.828 20:07:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:17.828 20:07:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:15:17.828 20:07:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:17.828 20:07:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:17.828 20:07:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:17.828 20:07:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:17.828 20:07:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:17.828 20:07:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:17.828 20:07:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:17.828 20:07:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:17.828 20:07:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:17.828 20:07:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:17.828 20:07:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:17.828 20:07:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:17.828 20:07:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:17.828 20:07:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:17.828 20:07:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:17.828 20:07:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:17.828 20:07:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:17.828 20:07:10 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.828 20:07:10 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.828 20:07:10 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.828 20:07:10 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.828 20:07:10 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.828 20:07:10 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.828 20:07:10 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:17.828 20:07:10 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.828 20:07:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:15:17.828 20:07:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:17.828 20:07:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:17.828 20:07:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:17.828 20:07:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:17.828 20:07:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:17.828 20:07:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:17.828 20:07:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:17.828 20:07:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:17.829 20:07:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:17.829 20:07:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:17.829 20:07:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:17.829 20:07:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:17.829 20:07:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:17.829 20:07:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:17.829 20:07:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.829 20:07:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:15:17.829 20:07:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.829 20:07:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:17.829 20:07:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:17.829 20:07:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:15:17.829 20:07:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:25.973 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:25.973 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:25.973 Found net devices under 0000:31:00.0: cvl_0_0 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:25.973 20:07:18 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:25.973 Found net devices under 0000:31:00.1: cvl_0_1 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:25.973 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:26.235 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:26.235 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:26.235 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:26.235 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:26.235 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:26.235 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:26.235 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:26.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:26.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.585 ms 00:15:26.235 00:15:26.235 --- 10.0.0.2 ping statistics --- 00:15:26.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.235 rtt min/avg/max/mdev = 0.585/0.585/0.585/0.000 ms 00:15:26.235 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:26.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:26.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.353 ms 00:15:26.235 00:15:26.235 --- 10.0.0.1 ping statistics --- 00:15:26.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.235 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:15:26.235 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:26.235 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:15:26.235 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:26.235 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:26.235 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:26.235 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:26.235 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:26.235 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:26.235 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:26.235 20:07:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:26.235 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:26.235 20:07:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:26.235 20:07:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:26.235 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=4143889 00:15:26.235 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 4143889 00:15:26.235 20:07:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:26.235 20:07:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 4143889 ']' 00:15:26.235 20:07:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.235 20:07:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:26.235 20:07:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:26.235 20:07:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:26.235 20:07:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:26.497 [2024-05-15 20:07:18.755120] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
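Stripped of timestamps, the nvmf_tcp_init sequence traced above moves the target port (cvl_0_0, one of the two E810 ports found earlier) into a private network namespace and leaves the initiator port (cvl_0_1) in the default namespace, so the NVMe/TCP traffic actually crosses between the two physical ports. The commands below are a condensation of what the log shows, not the literal nvmf/common.sh source; the nvmf_tgt path is shortened.

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP in
    ping -c 1 10.0.0.2                                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator
    # nvmf_tgt then runs inside the target namespace:
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

After both ping checks come back with a reply, as they do above, nvmftestinit returns 0 and the connect_stress test proper can start.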
00:15:26.497 [2024-05-15 20:07:18.755192] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.497 EAL: No free 2048 kB hugepages reported on node 1 00:15:26.497 [2024-05-15 20:07:18.834032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:26.497 [2024-05-15 20:07:18.908080] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:26.497 [2024-05-15 20:07:18.908119] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:26.497 [2024-05-15 20:07:18.908127] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:26.497 [2024-05-15 20:07:18.908134] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:26.497 [2024-05-15 20:07:18.908139] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:26.497 [2024-05-15 20:07:18.908244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:26.497 [2024-05-15 20:07:18.908375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:26.497 [2024-05-15 20:07:18.908573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:27.439 20:07:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:27.439 20:07:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:15:27.439 20:07:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:27.439 20:07:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:27.439 20:07:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:27.439 20:07:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:27.439 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:27.439 20:07:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.439 20:07:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:27.439 [2024-05-15 20:07:19.680727] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:27.439 20:07:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.439 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:27.439 20:07:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.439 20:07:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:27.439 20:07:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.439 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:27.439 20:07:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.439 20:07:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:27.439 [2024-05-15 20:07:19.704983] nvmf_rpc.c: 615:decode_rpc_listen_address: 
*WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:27.439 [2024-05-15 20:07:19.705167] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:27.439 20:07:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.439 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:27.439 20:07:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.439 20:07:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:27.439 NULL1 00:15:27.439 20:07:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.439 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=4144072 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:27.440 EAL: No free 2048 kB hugepages reported on node 1 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4144072 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.440 20:07:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:27.701 20:07:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.701 20:07:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4144072 00:15:27.701 20:07:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:27.701 20:07:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.701 20:07:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:27.963 20:07:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.963 20:07:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4144072 00:15:27.963 20:07:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:27.963 20:07:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.963 20:07:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:28.535 20:07:20 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.535 20:07:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4144072 00:15:28.535 20:07:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:28.535 20:07:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.535 20:07:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:28.796 20:07:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.796 20:07:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4144072 00:15:28.796 20:07:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:28.796 20:07:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.796 20:07:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:29.056 20:07:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.056 20:07:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4144072 00:15:29.056 20:07:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:29.056 20:07:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.056 20:07:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:29.316 20:07:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.316 20:07:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4144072 00:15:29.316 20:07:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:29.316 20:07:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.316 20:07:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:29.577 20:07:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.836 20:07:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4144072 00:15:29.836 20:07:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:29.836 20:07:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.836 20:07:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:30.096 20:07:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.096 20:07:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4144072 00:15:30.096 20:07:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:30.096 20:07:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.096 20:07:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:30.357 20:07:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.357 20:07:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4144072 00:15:30.357 20:07:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:30.357 20:07:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.357 20:07:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:30.616 20:07:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:15:30.616 20:07:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4144072 00:15:30.616 20:07:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:30.616 20:07:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.616 20:07:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:31.186 20:07:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.186 20:07:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4144072 00:15:31.186 20:07:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:31.186 20:07:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.186 20:07:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:31.446 20:07:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.446 20:07:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4144072 00:15:31.446 20:07:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:31.446 20:07:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.446 20:07:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:31.707 20:07:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.707 20:07:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4144072 00:15:31.707 20:07:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:31.707 20:07:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.707 20:07:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:31.967 20:07:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.967 20:07:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4144072 00:15:31.967 20:07:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:31.967 20:07:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.967 20:07:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:32.226 20:07:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.226 20:07:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4144072 00:15:32.226 20:07:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:32.226 20:07:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.226 20:07:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:32.797 20:07:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.797 20:07:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4144072 00:15:32.797 20:07:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:32.797 20:07:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.797 20:07:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:33.058 20:07:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.058 20:07:25 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4144072 00:15:33.058 20:07:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:33.058 20:07:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.058 20:07:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:33.318 20:07:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.318 20:07:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4144072 00:15:33.318 20:07:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:33.318 20:07:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.318 20:07:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:33.578 20:07:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.578 20:07:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4144072 00:15:33.578 20:07:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:33.578 20:07:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.578 20:07:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:33.839 20:07:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.839 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4144072 00:15:33.839 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:33.839 20:07:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.839 20:07:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.408 20:07:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.408 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4144072 00:15:34.409 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:34.409 20:07:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.409 20:07:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.669 20:07:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.669 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4144072 00:15:34.669 20:07:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:34.669 20:07:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.669 20:07:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.930 20:07:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.930 20:07:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4144072 00:15:34.930 20:07:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:34.930 20:07:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.930 20:07:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:35.191 20:07:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.191 20:07:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 
-- # kill -0 4144072 00:15:35.191 20:07:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:35.191 20:07:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.191 20:07:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:35.452 20:07:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.452 20:07:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4144072 00:15:35.452 20:07:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:35.452 20:07:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.452 20:07:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.022 20:07:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.022 20:07:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4144072 00:15:36.022 20:07:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:36.022 20:07:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.022 20:07:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.283 20:07:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.283 20:07:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4144072 00:15:36.283 20:07:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:36.283 20:07:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.283 20:07:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.544 20:07:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.544 20:07:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4144072 00:15:36.544 20:07:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:36.544 20:07:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.544 20:07:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.804 20:07:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.804 20:07:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4144072 00:15:36.804 20:07:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:36.804 20:07:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.804 20:07:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.064 20:07:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.064 20:07:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4144072 00:15:37.064 20:07:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:37.064 20:07:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.064 20:07:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.635 20:07:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.635 20:07:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4144072 00:15:37.635 20:07:29 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:37.635 20:07:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.635 20:07:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.635 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:37.897 20:07:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.897 20:07:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4144072 00:15:37.897 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (4144072) - No such process 00:15:37.897 20:07:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 4144072 00:15:37.897 20:07:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:37.897 20:07:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:37.897 20:07:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:37.897 20:07:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:37.897 20:07:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:15:37.897 20:07:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:37.897 20:07:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:15:37.897 20:07:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:37.897 20:07:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:37.897 rmmod nvme_tcp 00:15:37.897 rmmod nvme_fabrics 00:15:37.897 rmmod nvme_keyring 00:15:37.897 20:07:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:37.897 20:07:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:15:37.897 20:07:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:15:37.897 20:07:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 4143889 ']' 00:15:37.897 20:07:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 4143889 00:15:37.897 20:07:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 4143889 ']' 00:15:37.897 20:07:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 4143889 00:15:37.897 20:07:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:15:37.897 20:07:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:37.897 20:07:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4143889 00:15:37.897 20:07:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:37.897 20:07:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:37.897 20:07:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4143889' 00:15:37.897 killing process with pid 4143889 00:15:37.897 20:07:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 4143889 00:15:37.897 [2024-05-15 20:07:30.343445] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled 
for removal in v24.09 hit 1 times 00:15:37.897 20:07:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 4143889 00:15:38.159 20:07:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:38.159 20:07:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:38.159 20:07:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:38.159 20:07:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:38.159 20:07:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:38.159 20:07:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.159 20:07:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:38.159 20:07:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.086 20:07:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:40.086 00:15:40.086 real 0m22.417s 00:15:40.086 user 0m43.829s 00:15:40.086 sys 0m9.484s 00:15:40.086 20:07:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:40.086 20:07:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.086 ************************************ 00:15:40.086 END TEST nvmf_connect_stress 00:15:40.086 ************************************ 00:15:40.404 20:07:32 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:40.404 20:07:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:40.404 20:07:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:40.404 20:07:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:40.404 ************************************ 00:15:40.404 START TEST nvmf_fused_ordering 00:15:40.404 ************************************ 00:15:40.404 20:07:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:40.404 * Looking for test storage... 
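Put back together, the connect_stress test that just ended follows a simple pattern: build the subsystem behind the TCP listener, start the connect/disconnect stressor against it for a fixed time, and keep replaying a batch of RPCs for as long as the stressor is alive, which is what the repeated "kill -0 4144072" probes above are doing. The sketch below is assembled from the traced commands and is not the literal connect_stress.sh text; rpc_cmd is the autotest helper seen in the trace (it issues the named RPCs against the running nvmf_tgt), and the exact contents of the rpc.txt batch built by the seq 1 20 / cat loop are not visible in this log.

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512

    # 10-second connect/disconnect stressor (binary path shortened)
    connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!

    rpcs=rpc.txt                       # RPC batch generated by the seq 1 20 / cat loop
    while kill -0 "$PERF_PID"; do      # traced as connect_stress.sh@34
        rpc_cmd < "$rpcs"              # @35: replay the batch against the live target
    done
    wait "$PERF_PID"
    rm -f "$rpcs"

Once the stressor exits after its 10 seconds, kill -0 fails with "No such process", the loop ends, and nvmftestfini tears the target down the same way it did after ns_hotplug_stress.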
00:15:40.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:40.404 20:07:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:40.404 20:07:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:15:40.404 20:07:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:40.404 20:07:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:40.404 20:07:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:40.404 20:07:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:40.404 20:07:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:40.404 20:07:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:40.404 20:07:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:40.404 20:07:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:40.404 20:07:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:40.404 20:07:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:40.404 20:07:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:40.404 20:07:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:40.404 20:07:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:40.404 20:07:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:40.404 20:07:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:40.404 20:07:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:40.404 20:07:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:40.404 20:07:32 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:40.404 20:07:32 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:40.404 20:07:32 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:40.404 20:07:32 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.404 20:07:32 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.405 20:07:32 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.405 20:07:32 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:40.405 20:07:32 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.405 20:07:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:15:40.405 20:07:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:40.405 20:07:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:40.405 20:07:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:40.405 20:07:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:40.405 20:07:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:40.405 20:07:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:40.405 20:07:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:40.405 20:07:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:40.405 20:07:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:40.405 20:07:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:40.405 20:07:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:40.405 20:07:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:40.405 20:07:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:40.405 20:07:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:40.405 20:07:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.405 20:07:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:15:40.405 20:07:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.405 20:07:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:40.405 20:07:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:40.405 20:07:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:15:40.405 20:07:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:48.558 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:48.558 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:48.558 Found net devices under 0000:31:00.0: cvl_0_0 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:48.558 20:07:40 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:48.558 Found net devices under 0000:31:00.1: cvl_0_1 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:48.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:48.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.481 ms 00:15:48.558 00:15:48.558 --- 10.0.0.2 ping statistics --- 00:15:48.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.558 rtt min/avg/max/mdev = 0.481/0.481/0.481/0.000 ms 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:48.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:48.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.340 ms 00:15:48.558 00:15:48.558 --- 10.0.0.1 ping statistics --- 00:15:48.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.558 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=4150779 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 4150779 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 4150779 ']' 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:48.558 20:07:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:48.558 [2024-05-15 20:07:40.986517] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
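The interface plumbing traced above is what lets the kernel-side initiator reach a target that owns the second port: nvmftestinit maps each supported PCI function to its netdev through sysfs, moves the target-side port into its own network namespace, and checks reachability in both directions. Condensed from the commands above, with device names and addresses exactly as logged:

    # Map each supported NIC PCI function to its kernel interface name.
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")       # keep only the ifnames, e.g. cvl_0_0, cvl_0_1
        net_devs+=("${pci_net_devs[@]}")
    done

    # Keep the initiator port on the host; move the target port into its own namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # host -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> host

With both pings answering, the helper loads nvme-tcp on the host and starts nvmf_tgt inside the namespace (the -i 0 -e 0xFFFF -m 0x2 invocation above).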
00:15:48.558 [2024-05-15 20:07:40.986582] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:48.558 EAL: No free 2048 kB hugepages reported on node 1 00:15:48.820 [2024-05-15 20:07:41.065061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.820 [2024-05-15 20:07:41.137908] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:48.820 [2024-05-15 20:07:41.137944] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:48.820 [2024-05-15 20:07:41.137953] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:48.820 [2024-05-15 20:07:41.137960] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:48.820 [2024-05-15 20:07:41.137967] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:48.820 [2024-05-15 20:07:41.137991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.393 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:49.393 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:15:49.393 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:49.393 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:49.393 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:49.393 20:07:41 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:49.393 20:07:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:49.393 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.393 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:49.393 [2024-05-15 20:07:41.893603] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:49.654 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.654 20:07:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:49.654 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.654 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:49.654 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.654 20:07:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:49.654 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.654 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:49.654 [2024-05-15 20:07:41.917603] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:49.654 [2024-05-15 20:07:41.917784] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:49.654 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.654 20:07:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:49.654 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.654 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:49.654 NULL1 00:15:49.654 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.654 20:07:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:49.654 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.654 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:49.654 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.654 20:07:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:49.654 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.654 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:49.654 20:07:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.654 20:07:41 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:49.654 [2024-05-15 20:07:41.981624] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
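The rpc_cmd calls traced above configure the freshly started target before the workload runs. rpc_cmd is a thin wrapper that forwards its arguments to scripts/rpc.py, so the same configuration could be reproduced by hand with the values from this log; a sketch, not the test script itself:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192      # same flags the test passes
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                            # allow any host, serial, max 10 namespaces
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512              # 1000 MB null bdev, 512 B blocks
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # The workload is the fused_ordering example app pointed at that listener:
    ./test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'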
00:15:49.654 [2024-05-15 20:07:41.981689] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4151128 ] 00:15:49.654 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.915 Attached to nqn.2016-06.io.spdk:cnode1 00:15:49.915 Namespace ID: 1 size: 1GB 00:15:49.915 fused_ordering(0) 00:15:49.915 fused_ordering(1) 00:15:49.915 fused_ordering(2) 00:15:49.915 fused_ordering(3) 00:15:49.915 fused_ordering(4) 00:15:49.915 fused_ordering(5) 00:15:49.915 fused_ordering(6) 00:15:49.915 fused_ordering(7) 00:15:49.915 fused_ordering(8) 00:15:49.915 fused_ordering(9) 00:15:49.915 fused_ordering(10) 00:15:49.915 fused_ordering(11) 00:15:49.915 fused_ordering(12) 00:15:49.915 fused_ordering(13) 00:15:49.915 fused_ordering(14) 00:15:49.915 fused_ordering(15) 00:15:49.915 fused_ordering(16) 00:15:49.915 fused_ordering(17) 00:15:49.915 fused_ordering(18) 00:15:49.915 fused_ordering(19) 00:15:49.915 fused_ordering(20) 00:15:49.915 fused_ordering(21) 00:15:49.915 fused_ordering(22) 00:15:49.915 fused_ordering(23) 00:15:49.915 fused_ordering(24) 00:15:49.915 fused_ordering(25) 00:15:49.915 fused_ordering(26) 00:15:49.915 fused_ordering(27) 00:15:49.915 fused_ordering(28) 00:15:49.915 fused_ordering(29) 00:15:49.915 fused_ordering(30) 00:15:49.915 fused_ordering(31) 00:15:49.915 fused_ordering(32) 00:15:49.915 fused_ordering(33) 00:15:49.915 fused_ordering(34) 00:15:49.915 fused_ordering(35) 00:15:49.915 fused_ordering(36) 00:15:49.915 fused_ordering(37) 00:15:49.915 fused_ordering(38) 00:15:49.915 fused_ordering(39) 00:15:49.915 fused_ordering(40) 00:15:49.915 fused_ordering(41) 00:15:49.915 fused_ordering(42) 00:15:49.915 fused_ordering(43) 00:15:49.915 fused_ordering(44) 00:15:49.915 fused_ordering(45) 00:15:49.915 fused_ordering(46) 00:15:49.915 fused_ordering(47) 00:15:49.915 fused_ordering(48) 00:15:49.915 fused_ordering(49) 00:15:49.915 fused_ordering(50) 00:15:49.915 fused_ordering(51) 00:15:49.915 fused_ordering(52) 00:15:49.915 fused_ordering(53) 00:15:49.915 fused_ordering(54) 00:15:49.915 fused_ordering(55) 00:15:49.915 fused_ordering(56) 00:15:49.915 fused_ordering(57) 00:15:49.915 fused_ordering(58) 00:15:49.915 fused_ordering(59) 00:15:49.915 fused_ordering(60) 00:15:49.915 fused_ordering(61) 00:15:49.915 fused_ordering(62) 00:15:49.915 fused_ordering(63) 00:15:49.915 fused_ordering(64) 00:15:49.915 fused_ordering(65) 00:15:49.915 fused_ordering(66) 00:15:49.915 fused_ordering(67) 00:15:49.915 fused_ordering(68) 00:15:49.915 fused_ordering(69) 00:15:49.915 fused_ordering(70) 00:15:49.915 fused_ordering(71) 00:15:49.915 fused_ordering(72) 00:15:49.915 fused_ordering(73) 00:15:49.915 fused_ordering(74) 00:15:49.915 fused_ordering(75) 00:15:49.915 fused_ordering(76) 00:15:49.915 fused_ordering(77) 00:15:49.915 fused_ordering(78) 00:15:49.915 fused_ordering(79) 00:15:49.915 fused_ordering(80) 00:15:49.915 fused_ordering(81) 00:15:49.915 fused_ordering(82) 00:15:49.915 fused_ordering(83) 00:15:49.915 fused_ordering(84) 00:15:49.915 fused_ordering(85) 00:15:49.915 fused_ordering(86) 00:15:49.915 fused_ordering(87) 00:15:49.915 fused_ordering(88) 00:15:49.915 fused_ordering(89) 00:15:49.915 fused_ordering(90) 00:15:49.915 fused_ordering(91) 00:15:49.915 fused_ordering(92) 00:15:49.915 fused_ordering(93) 00:15:49.915 fused_ordering(94) 00:15:49.915 fused_ordering(95) 00:15:49.915 fused_ordering(96) 00:15:49.915 
fused_ordering(97) 00:15:49.915 fused_ordering(98) 00:15:49.915 fused_ordering(99) 00:15:49.915 fused_ordering(100) 00:15:49.915 fused_ordering(101) 00:15:49.915 fused_ordering(102) 00:15:49.915 fused_ordering(103) 00:15:49.915 fused_ordering(104) 00:15:49.915 fused_ordering(105) 00:15:49.915 fused_ordering(106) 00:15:49.915 fused_ordering(107) 00:15:49.915 fused_ordering(108) 00:15:49.915 fused_ordering(109) 00:15:49.915 fused_ordering(110) 00:15:49.915 fused_ordering(111) 00:15:49.915 fused_ordering(112) 00:15:49.915 fused_ordering(113) 00:15:49.915 fused_ordering(114) 00:15:49.915 fused_ordering(115) 00:15:49.915 fused_ordering(116) 00:15:49.915 fused_ordering(117) 00:15:49.915 fused_ordering(118) 00:15:49.915 fused_ordering(119) 00:15:49.915 fused_ordering(120) 00:15:49.915 fused_ordering(121) 00:15:49.915 fused_ordering(122) 00:15:49.915 fused_ordering(123) 00:15:49.915 fused_ordering(124) 00:15:49.915 fused_ordering(125) 00:15:49.915 fused_ordering(126) 00:15:49.915 fused_ordering(127) 00:15:49.915 fused_ordering(128) 00:15:49.915 fused_ordering(129) 00:15:49.915 fused_ordering(130) 00:15:49.915 fused_ordering(131) 00:15:49.915 fused_ordering(132) 00:15:49.915 fused_ordering(133) 00:15:49.915 fused_ordering(134) 00:15:49.915 fused_ordering(135) 00:15:49.915 fused_ordering(136) 00:15:49.915 fused_ordering(137) 00:15:49.915 fused_ordering(138) 00:15:49.915 fused_ordering(139) 00:15:49.915 fused_ordering(140) 00:15:49.915 fused_ordering(141) 00:15:49.915 fused_ordering(142) 00:15:49.915 fused_ordering(143) 00:15:49.915 fused_ordering(144) 00:15:49.915 fused_ordering(145) 00:15:49.915 fused_ordering(146) 00:15:49.915 fused_ordering(147) 00:15:49.915 fused_ordering(148) 00:15:49.915 fused_ordering(149) 00:15:49.915 fused_ordering(150) 00:15:49.915 fused_ordering(151) 00:15:49.915 fused_ordering(152) 00:15:49.915 fused_ordering(153) 00:15:49.915 fused_ordering(154) 00:15:49.915 fused_ordering(155) 00:15:49.915 fused_ordering(156) 00:15:49.915 fused_ordering(157) 00:15:49.915 fused_ordering(158) 00:15:49.915 fused_ordering(159) 00:15:49.915 fused_ordering(160) 00:15:49.915 fused_ordering(161) 00:15:49.915 fused_ordering(162) 00:15:49.915 fused_ordering(163) 00:15:49.915 fused_ordering(164) 00:15:49.915 fused_ordering(165) 00:15:49.915 fused_ordering(166) 00:15:49.915 fused_ordering(167) 00:15:49.915 fused_ordering(168) 00:15:49.915 fused_ordering(169) 00:15:49.915 fused_ordering(170) 00:15:49.915 fused_ordering(171) 00:15:49.915 fused_ordering(172) 00:15:49.915 fused_ordering(173) 00:15:49.915 fused_ordering(174) 00:15:49.915 fused_ordering(175) 00:15:49.915 fused_ordering(176) 00:15:49.915 fused_ordering(177) 00:15:49.915 fused_ordering(178) 00:15:49.915 fused_ordering(179) 00:15:49.915 fused_ordering(180) 00:15:49.915 fused_ordering(181) 00:15:49.915 fused_ordering(182) 00:15:49.915 fused_ordering(183) 00:15:49.915 fused_ordering(184) 00:15:49.915 fused_ordering(185) 00:15:49.915 fused_ordering(186) 00:15:49.915 fused_ordering(187) 00:15:49.915 fused_ordering(188) 00:15:49.915 fused_ordering(189) 00:15:49.915 fused_ordering(190) 00:15:49.915 fused_ordering(191) 00:15:49.915 fused_ordering(192) 00:15:49.915 fused_ordering(193) 00:15:49.915 fused_ordering(194) 00:15:49.915 fused_ordering(195) 00:15:49.915 fused_ordering(196) 00:15:49.915 fused_ordering(197) 00:15:49.915 fused_ordering(198) 00:15:49.915 fused_ordering(199) 00:15:49.915 fused_ordering(200) 00:15:49.915 fused_ordering(201) 00:15:49.915 fused_ordering(202) 00:15:49.915 fused_ordering(203) 00:15:49.915 fused_ordering(204) 
00:15:49.915 fused_ordering(205) 00:15:50.486 fused_ordering(206) 00:15:50.486 fused_ordering(207) 00:15:50.486 fused_ordering(208) 00:15:50.486 fused_ordering(209) 00:15:50.486 fused_ordering(210) 00:15:50.486 fused_ordering(211) 00:15:50.486 fused_ordering(212) 00:15:50.486 fused_ordering(213) 00:15:50.486 fused_ordering(214) 00:15:50.486 fused_ordering(215) 00:15:50.486 fused_ordering(216) 00:15:50.486 fused_ordering(217) 00:15:50.486 fused_ordering(218) 00:15:50.486 fused_ordering(219) 00:15:50.486 fused_ordering(220) 00:15:50.486 fused_ordering(221) 00:15:50.486 fused_ordering(222) 00:15:50.486 fused_ordering(223) 00:15:50.486 fused_ordering(224) 00:15:50.486 fused_ordering(225) 00:15:50.486 fused_ordering(226) 00:15:50.486 fused_ordering(227) 00:15:50.486 fused_ordering(228) 00:15:50.486 fused_ordering(229) 00:15:50.486 fused_ordering(230) 00:15:50.486 fused_ordering(231) 00:15:50.486 fused_ordering(232) 00:15:50.486 fused_ordering(233) 00:15:50.486 fused_ordering(234) 00:15:50.486 fused_ordering(235) 00:15:50.486 fused_ordering(236) 00:15:50.486 fused_ordering(237) 00:15:50.486 fused_ordering(238) 00:15:50.486 fused_ordering(239) 00:15:50.486 fused_ordering(240) 00:15:50.486 fused_ordering(241) 00:15:50.486 fused_ordering(242) 00:15:50.486 fused_ordering(243) 00:15:50.486 fused_ordering(244) 00:15:50.486 fused_ordering(245) 00:15:50.486 fused_ordering(246) 00:15:50.486 fused_ordering(247) 00:15:50.486 fused_ordering(248) 00:15:50.486 fused_ordering(249) 00:15:50.486 fused_ordering(250) 00:15:50.486 fused_ordering(251) 00:15:50.486 fused_ordering(252) 00:15:50.486 fused_ordering(253) 00:15:50.486 fused_ordering(254) 00:15:50.486 fused_ordering(255) 00:15:50.486 fused_ordering(256) 00:15:50.486 fused_ordering(257) 00:15:50.486 fused_ordering(258) 00:15:50.486 fused_ordering(259) 00:15:50.486 fused_ordering(260) 00:15:50.486 fused_ordering(261) 00:15:50.486 fused_ordering(262) 00:15:50.486 fused_ordering(263) 00:15:50.486 fused_ordering(264) 00:15:50.486 fused_ordering(265) 00:15:50.486 fused_ordering(266) 00:15:50.486 fused_ordering(267) 00:15:50.486 fused_ordering(268) 00:15:50.486 fused_ordering(269) 00:15:50.486 fused_ordering(270) 00:15:50.486 fused_ordering(271) 00:15:50.486 fused_ordering(272) 00:15:50.486 fused_ordering(273) 00:15:50.486 fused_ordering(274) 00:15:50.486 fused_ordering(275) 00:15:50.486 fused_ordering(276) 00:15:50.486 fused_ordering(277) 00:15:50.486 fused_ordering(278) 00:15:50.486 fused_ordering(279) 00:15:50.486 fused_ordering(280) 00:15:50.486 fused_ordering(281) 00:15:50.486 fused_ordering(282) 00:15:50.486 fused_ordering(283) 00:15:50.486 fused_ordering(284) 00:15:50.486 fused_ordering(285) 00:15:50.486 fused_ordering(286) 00:15:50.486 fused_ordering(287) 00:15:50.486 fused_ordering(288) 00:15:50.486 fused_ordering(289) 00:15:50.486 fused_ordering(290) 00:15:50.486 fused_ordering(291) 00:15:50.486 fused_ordering(292) 00:15:50.486 fused_ordering(293) 00:15:50.486 fused_ordering(294) 00:15:50.486 fused_ordering(295) 00:15:50.486 fused_ordering(296) 00:15:50.486 fused_ordering(297) 00:15:50.486 fused_ordering(298) 00:15:50.486 fused_ordering(299) 00:15:50.486 fused_ordering(300) 00:15:50.486 fused_ordering(301) 00:15:50.486 fused_ordering(302) 00:15:50.486 fused_ordering(303) 00:15:50.486 fused_ordering(304) 00:15:50.486 fused_ordering(305) 00:15:50.486 fused_ordering(306) 00:15:50.486 fused_ordering(307) 00:15:50.486 fused_ordering(308) 00:15:50.486 fused_ordering(309) 00:15:50.486 fused_ordering(310) 00:15:50.486 fused_ordering(311) 00:15:50.486 
fused_ordering(312) 00:15:50.486 fused_ordering(313) 00:15:50.486 fused_ordering(314) 00:15:50.486 fused_ordering(315) 00:15:50.486 fused_ordering(316) 00:15:50.486 fused_ordering(317) 00:15:50.486 fused_ordering(318) 00:15:50.486 fused_ordering(319) 00:15:50.486 fused_ordering(320) 00:15:50.486 fused_ordering(321) 00:15:50.486 fused_ordering(322) 00:15:50.486 fused_ordering(323) 00:15:50.486 fused_ordering(324) 00:15:50.486 fused_ordering(325) 00:15:50.486 fused_ordering(326) 00:15:50.486 fused_ordering(327) 00:15:50.486 fused_ordering(328) 00:15:50.486 fused_ordering(329) 00:15:50.486 fused_ordering(330) 00:15:50.486 fused_ordering(331) 00:15:50.486 fused_ordering(332) 00:15:50.486 fused_ordering(333) 00:15:50.486 fused_ordering(334) 00:15:50.486 fused_ordering(335) 00:15:50.486 fused_ordering(336) 00:15:50.486 fused_ordering(337) 00:15:50.486 fused_ordering(338) 00:15:50.486 fused_ordering(339) 00:15:50.486 fused_ordering(340) 00:15:50.486 fused_ordering(341) 00:15:50.486 fused_ordering(342) 00:15:50.486 fused_ordering(343) 00:15:50.486 fused_ordering(344) 00:15:50.486 fused_ordering(345) 00:15:50.486 fused_ordering(346) 00:15:50.486 fused_ordering(347) 00:15:50.487 fused_ordering(348) 00:15:50.487 fused_ordering(349) 00:15:50.487 fused_ordering(350) 00:15:50.487 fused_ordering(351) 00:15:50.487 fused_ordering(352) 00:15:50.487 fused_ordering(353) 00:15:50.487 fused_ordering(354) 00:15:50.487 fused_ordering(355) 00:15:50.487 fused_ordering(356) 00:15:50.487 fused_ordering(357) 00:15:50.487 fused_ordering(358) 00:15:50.487 fused_ordering(359) 00:15:50.487 fused_ordering(360) 00:15:50.487 fused_ordering(361) 00:15:50.487 fused_ordering(362) 00:15:50.487 fused_ordering(363) 00:15:50.487 fused_ordering(364) 00:15:50.487 fused_ordering(365) 00:15:50.487 fused_ordering(366) 00:15:50.487 fused_ordering(367) 00:15:50.487 fused_ordering(368) 00:15:50.487 fused_ordering(369) 00:15:50.487 fused_ordering(370) 00:15:50.487 fused_ordering(371) 00:15:50.487 fused_ordering(372) 00:15:50.487 fused_ordering(373) 00:15:50.487 fused_ordering(374) 00:15:50.487 fused_ordering(375) 00:15:50.487 fused_ordering(376) 00:15:50.487 fused_ordering(377) 00:15:50.487 fused_ordering(378) 00:15:50.487 fused_ordering(379) 00:15:50.487 fused_ordering(380) 00:15:50.487 fused_ordering(381) 00:15:50.487 fused_ordering(382) 00:15:50.487 fused_ordering(383) 00:15:50.487 fused_ordering(384) 00:15:50.487 fused_ordering(385) 00:15:50.487 fused_ordering(386) 00:15:50.487 fused_ordering(387) 00:15:50.487 fused_ordering(388) 00:15:50.487 fused_ordering(389) 00:15:50.487 fused_ordering(390) 00:15:50.487 fused_ordering(391) 00:15:50.487 fused_ordering(392) 00:15:50.487 fused_ordering(393) 00:15:50.487 fused_ordering(394) 00:15:50.487 fused_ordering(395) 00:15:50.487 fused_ordering(396) 00:15:50.487 fused_ordering(397) 00:15:50.487 fused_ordering(398) 00:15:50.487 fused_ordering(399) 00:15:50.487 fused_ordering(400) 00:15:50.487 fused_ordering(401) 00:15:50.487 fused_ordering(402) 00:15:50.487 fused_ordering(403) 00:15:50.487 fused_ordering(404) 00:15:50.487 fused_ordering(405) 00:15:50.487 fused_ordering(406) 00:15:50.487 fused_ordering(407) 00:15:50.487 fused_ordering(408) 00:15:50.487 fused_ordering(409) 00:15:50.487 fused_ordering(410) 00:15:50.746 fused_ordering(411) 00:15:50.746 fused_ordering(412) 00:15:50.746 fused_ordering(413) 00:15:50.746 fused_ordering(414) 00:15:50.746 fused_ordering(415) 00:15:50.746 fused_ordering(416) 00:15:50.746 fused_ordering(417) 00:15:50.746 fused_ordering(418) 00:15:50.746 fused_ordering(419) 
00:15:50.746 fused_ordering(420) 00:15:50.746 fused_ordering(421) 00:15:50.746 fused_ordering(422) 00:15:50.746 fused_ordering(423) 00:15:50.746 fused_ordering(424) 00:15:50.746 fused_ordering(425) 00:15:50.746 fused_ordering(426) 00:15:50.746 fused_ordering(427) 00:15:50.746 fused_ordering(428) 00:15:50.746 fused_ordering(429) 00:15:50.746 fused_ordering(430) 00:15:50.746 fused_ordering(431) 00:15:50.746 fused_ordering(432) 00:15:50.746 fused_ordering(433) 00:15:50.746 fused_ordering(434) 00:15:50.746 fused_ordering(435) 00:15:50.746 fused_ordering(436) 00:15:50.746 fused_ordering(437) 00:15:50.746 fused_ordering(438) 00:15:50.746 fused_ordering(439) 00:15:50.746 fused_ordering(440) 00:15:50.746 fused_ordering(441) 00:15:50.746 fused_ordering(442) 00:15:50.746 fused_ordering(443) 00:15:50.746 fused_ordering(444) 00:15:50.746 fused_ordering(445) 00:15:50.746 fused_ordering(446) 00:15:50.746 fused_ordering(447) 00:15:50.746 fused_ordering(448) 00:15:50.746 fused_ordering(449) 00:15:50.746 fused_ordering(450) 00:15:50.746 fused_ordering(451) 00:15:50.746 fused_ordering(452) 00:15:50.746 fused_ordering(453) 00:15:50.746 fused_ordering(454) 00:15:50.746 fused_ordering(455) 00:15:50.746 fused_ordering(456) 00:15:50.746 fused_ordering(457) 00:15:50.746 fused_ordering(458) 00:15:50.746 fused_ordering(459) 00:15:50.746 fused_ordering(460) 00:15:50.746 fused_ordering(461) 00:15:50.746 fused_ordering(462) 00:15:50.746 fused_ordering(463) 00:15:50.746 fused_ordering(464) 00:15:50.746 fused_ordering(465) 00:15:50.746 fused_ordering(466) 00:15:50.746 fused_ordering(467) 00:15:50.746 fused_ordering(468) 00:15:50.746 fused_ordering(469) 00:15:50.746 fused_ordering(470) 00:15:50.746 fused_ordering(471) 00:15:50.746 fused_ordering(472) 00:15:50.746 fused_ordering(473) 00:15:50.746 fused_ordering(474) 00:15:50.746 fused_ordering(475) 00:15:50.746 fused_ordering(476) 00:15:50.746 fused_ordering(477) 00:15:50.746 fused_ordering(478) 00:15:50.746 fused_ordering(479) 00:15:50.746 fused_ordering(480) 00:15:50.746 fused_ordering(481) 00:15:50.746 fused_ordering(482) 00:15:50.746 fused_ordering(483) 00:15:50.746 fused_ordering(484) 00:15:50.746 fused_ordering(485) 00:15:50.746 fused_ordering(486) 00:15:50.746 fused_ordering(487) 00:15:50.746 fused_ordering(488) 00:15:50.746 fused_ordering(489) 00:15:50.746 fused_ordering(490) 00:15:50.746 fused_ordering(491) 00:15:50.746 fused_ordering(492) 00:15:50.746 fused_ordering(493) 00:15:50.746 fused_ordering(494) 00:15:50.746 fused_ordering(495) 00:15:50.746 fused_ordering(496) 00:15:50.746 fused_ordering(497) 00:15:50.746 fused_ordering(498) 00:15:50.746 fused_ordering(499) 00:15:50.746 fused_ordering(500) 00:15:50.746 fused_ordering(501) 00:15:50.746 fused_ordering(502) 00:15:50.746 fused_ordering(503) 00:15:50.746 fused_ordering(504) 00:15:50.746 fused_ordering(505) 00:15:50.746 fused_ordering(506) 00:15:50.746 fused_ordering(507) 00:15:50.746 fused_ordering(508) 00:15:50.746 fused_ordering(509) 00:15:50.746 fused_ordering(510) 00:15:50.746 fused_ordering(511) 00:15:50.746 fused_ordering(512) 00:15:50.746 fused_ordering(513) 00:15:50.746 fused_ordering(514) 00:15:50.746 fused_ordering(515) 00:15:50.746 fused_ordering(516) 00:15:50.746 fused_ordering(517) 00:15:50.746 fused_ordering(518) 00:15:50.746 fused_ordering(519) 00:15:50.746 fused_ordering(520) 00:15:50.746 fused_ordering(521) 00:15:50.746 fused_ordering(522) 00:15:50.746 fused_ordering(523) 00:15:50.746 fused_ordering(524) 00:15:50.746 fused_ordering(525) 00:15:50.746 fused_ordering(526) 00:15:50.747 
fused_ordering(527) 00:15:50.747 fused_ordering(528) 00:15:50.747 fused_ordering(529) 00:15:50.747 fused_ordering(530) 00:15:50.747 fused_ordering(531) 00:15:50.747 fused_ordering(532) 00:15:50.747 fused_ordering(533) 00:15:50.747 fused_ordering(534) 00:15:50.747 fused_ordering(535) 00:15:50.747 fused_ordering(536) 00:15:50.747 fused_ordering(537) 00:15:50.747 fused_ordering(538) 00:15:50.747 fused_ordering(539) 00:15:50.747 fused_ordering(540) 00:15:50.747 fused_ordering(541) 00:15:50.747 fused_ordering(542) 00:15:50.747 fused_ordering(543) 00:15:50.747 fused_ordering(544) 00:15:50.747 fused_ordering(545) 00:15:50.747 fused_ordering(546) 00:15:50.747 fused_ordering(547) 00:15:50.747 fused_ordering(548) 00:15:50.747 fused_ordering(549) 00:15:50.747 fused_ordering(550) 00:15:50.747 fused_ordering(551) 00:15:50.747 fused_ordering(552) 00:15:50.747 fused_ordering(553) 00:15:50.747 fused_ordering(554) 00:15:50.747 fused_ordering(555) 00:15:50.747 fused_ordering(556) 00:15:50.747 fused_ordering(557) 00:15:50.747 fused_ordering(558) 00:15:50.747 fused_ordering(559) 00:15:50.747 fused_ordering(560) 00:15:50.747 fused_ordering(561) 00:15:50.747 fused_ordering(562) 00:15:50.747 fused_ordering(563) 00:15:50.747 fused_ordering(564) 00:15:50.747 fused_ordering(565) 00:15:50.747 fused_ordering(566) 00:15:50.747 fused_ordering(567) 00:15:50.747 fused_ordering(568) 00:15:50.747 fused_ordering(569) 00:15:50.747 fused_ordering(570) 00:15:50.747 fused_ordering(571) 00:15:50.747 fused_ordering(572) 00:15:50.747 fused_ordering(573) 00:15:50.747 fused_ordering(574) 00:15:50.747 fused_ordering(575) 00:15:50.747 fused_ordering(576) 00:15:50.747 fused_ordering(577) 00:15:50.747 fused_ordering(578) 00:15:50.747 fused_ordering(579) 00:15:50.747 fused_ordering(580) 00:15:50.747 fused_ordering(581) 00:15:50.747 fused_ordering(582) 00:15:50.747 fused_ordering(583) 00:15:50.747 fused_ordering(584) 00:15:50.747 fused_ordering(585) 00:15:50.747 fused_ordering(586) 00:15:50.747 fused_ordering(587) 00:15:50.747 fused_ordering(588) 00:15:50.747 fused_ordering(589) 00:15:50.747 fused_ordering(590) 00:15:50.747 fused_ordering(591) 00:15:50.747 fused_ordering(592) 00:15:50.747 fused_ordering(593) 00:15:50.747 fused_ordering(594) 00:15:50.747 fused_ordering(595) 00:15:50.747 fused_ordering(596) 00:15:50.747 fused_ordering(597) 00:15:50.747 fused_ordering(598) 00:15:50.747 fused_ordering(599) 00:15:50.747 fused_ordering(600) 00:15:50.747 fused_ordering(601) 00:15:50.747 fused_ordering(602) 00:15:50.747 fused_ordering(603) 00:15:50.747 fused_ordering(604) 00:15:50.747 fused_ordering(605) 00:15:50.747 fused_ordering(606) 00:15:50.747 fused_ordering(607) 00:15:50.747 fused_ordering(608) 00:15:50.747 fused_ordering(609) 00:15:50.747 fused_ordering(610) 00:15:50.747 fused_ordering(611) 00:15:50.747 fused_ordering(612) 00:15:50.747 fused_ordering(613) 00:15:50.747 fused_ordering(614) 00:15:50.747 fused_ordering(615) 00:15:51.690 fused_ordering(616) 00:15:51.690 fused_ordering(617) 00:15:51.690 fused_ordering(618) 00:15:51.690 fused_ordering(619) 00:15:51.690 fused_ordering(620) 00:15:51.690 fused_ordering(621) 00:15:51.690 fused_ordering(622) 00:15:51.690 fused_ordering(623) 00:15:51.690 fused_ordering(624) 00:15:51.690 fused_ordering(625) 00:15:51.690 fused_ordering(626) 00:15:51.690 fused_ordering(627) 00:15:51.690 fused_ordering(628) 00:15:51.690 fused_ordering(629) 00:15:51.690 fused_ordering(630) 00:15:51.690 fused_ordering(631) 00:15:51.690 fused_ordering(632) 00:15:51.690 fused_ordering(633) 00:15:51.690 fused_ordering(634) 
00:15:51.690 fused_ordering(635) 00:15:51.690 fused_ordering(636) 00:15:51.690 fused_ordering(637) 00:15:51.690 fused_ordering(638) 00:15:51.690 fused_ordering(639) 00:15:51.690 fused_ordering(640) 00:15:51.690 fused_ordering(641) 00:15:51.690 fused_ordering(642) 00:15:51.690 fused_ordering(643) 00:15:51.690 fused_ordering(644) 00:15:51.690 fused_ordering(645) 00:15:51.690 fused_ordering(646) 00:15:51.690 fused_ordering(647) 00:15:51.690 fused_ordering(648) 00:15:51.690 fused_ordering(649) 00:15:51.690 fused_ordering(650) 00:15:51.690 fused_ordering(651) 00:15:51.690 fused_ordering(652) 00:15:51.690 fused_ordering(653) 00:15:51.690 fused_ordering(654) 00:15:51.690 fused_ordering(655) 00:15:51.690 fused_ordering(656) 00:15:51.690 fused_ordering(657) 00:15:51.690 fused_ordering(658) 00:15:51.690 fused_ordering(659) 00:15:51.690 fused_ordering(660) 00:15:51.690 fused_ordering(661) 00:15:51.690 fused_ordering(662) 00:15:51.690 fused_ordering(663) 00:15:51.690 fused_ordering(664) 00:15:51.690 fused_ordering(665) 00:15:51.690 fused_ordering(666) 00:15:51.690 fused_ordering(667) 00:15:51.690 fused_ordering(668) 00:15:51.690 fused_ordering(669) 00:15:51.690 fused_ordering(670) 00:15:51.690 fused_ordering(671) 00:15:51.690 fused_ordering(672) 00:15:51.690 fused_ordering(673) 00:15:51.690 fused_ordering(674) 00:15:51.690 fused_ordering(675) 00:15:51.690 fused_ordering(676) 00:15:51.690 fused_ordering(677) 00:15:51.690 fused_ordering(678) 00:15:51.690 fused_ordering(679) 00:15:51.690 fused_ordering(680) 00:15:51.690 fused_ordering(681) 00:15:51.690 fused_ordering(682) 00:15:51.690 fused_ordering(683) 00:15:51.690 fused_ordering(684) 00:15:51.690 fused_ordering(685) 00:15:51.690 fused_ordering(686) 00:15:51.690 fused_ordering(687) 00:15:51.690 fused_ordering(688) 00:15:51.690 fused_ordering(689) 00:15:51.690 fused_ordering(690) 00:15:51.690 fused_ordering(691) 00:15:51.690 fused_ordering(692) 00:15:51.690 fused_ordering(693) 00:15:51.690 fused_ordering(694) 00:15:51.690 fused_ordering(695) 00:15:51.690 fused_ordering(696) 00:15:51.690 fused_ordering(697) 00:15:51.690 fused_ordering(698) 00:15:51.690 fused_ordering(699) 00:15:51.690 fused_ordering(700) 00:15:51.690 fused_ordering(701) 00:15:51.690 fused_ordering(702) 00:15:51.690 fused_ordering(703) 00:15:51.690 fused_ordering(704) 00:15:51.690 fused_ordering(705) 00:15:51.690 fused_ordering(706) 00:15:51.690 fused_ordering(707) 00:15:51.690 fused_ordering(708) 00:15:51.690 fused_ordering(709) 00:15:51.690 fused_ordering(710) 00:15:51.690 fused_ordering(711) 00:15:51.690 fused_ordering(712) 00:15:51.690 fused_ordering(713) 00:15:51.690 fused_ordering(714) 00:15:51.690 fused_ordering(715) 00:15:51.690 fused_ordering(716) 00:15:51.690 fused_ordering(717) 00:15:51.690 fused_ordering(718) 00:15:51.690 fused_ordering(719) 00:15:51.690 fused_ordering(720) 00:15:51.690 fused_ordering(721) 00:15:51.690 fused_ordering(722) 00:15:51.690 fused_ordering(723) 00:15:51.690 fused_ordering(724) 00:15:51.690 fused_ordering(725) 00:15:51.690 fused_ordering(726) 00:15:51.690 fused_ordering(727) 00:15:51.690 fused_ordering(728) 00:15:51.690 fused_ordering(729) 00:15:51.690 fused_ordering(730) 00:15:51.690 fused_ordering(731) 00:15:51.690 fused_ordering(732) 00:15:51.690 fused_ordering(733) 00:15:51.690 fused_ordering(734) 00:15:51.690 fused_ordering(735) 00:15:51.690 fused_ordering(736) 00:15:51.690 fused_ordering(737) 00:15:51.690 fused_ordering(738) 00:15:51.690 fused_ordering(739) 00:15:51.690 fused_ordering(740) 00:15:51.690 fused_ordering(741) 00:15:51.690 
fused_ordering(742) 00:15:51.690 fused_ordering(743) 00:15:51.690 fused_ordering(744) 00:15:51.690 fused_ordering(745) 00:15:51.690 fused_ordering(746) 00:15:51.690 fused_ordering(747) 00:15:51.690 fused_ordering(748) 00:15:51.690 fused_ordering(749) 00:15:51.690 fused_ordering(750) 00:15:51.690 fused_ordering(751) 00:15:51.690 fused_ordering(752) 00:15:51.690 fused_ordering(753) 00:15:51.690 fused_ordering(754) 00:15:51.690 fused_ordering(755) 00:15:51.690 fused_ordering(756) 00:15:51.690 fused_ordering(757) 00:15:51.690 fused_ordering(758) 00:15:51.690 fused_ordering(759) 00:15:51.690 fused_ordering(760) 00:15:51.690 fused_ordering(761) 00:15:51.690 fused_ordering(762) 00:15:51.690 fused_ordering(763) 00:15:51.690 fused_ordering(764) 00:15:51.690 fused_ordering(765) 00:15:51.690 fused_ordering(766) 00:15:51.690 fused_ordering(767) 00:15:51.690 fused_ordering(768) 00:15:51.690 fused_ordering(769) 00:15:51.690 fused_ordering(770) 00:15:51.690 fused_ordering(771) 00:15:51.690 fused_ordering(772) 00:15:51.690 fused_ordering(773) 00:15:51.690 fused_ordering(774) 00:15:51.690 fused_ordering(775) 00:15:51.690 fused_ordering(776) 00:15:51.690 fused_ordering(777) 00:15:51.690 fused_ordering(778) 00:15:51.690 fused_ordering(779) 00:15:51.690 fused_ordering(780) 00:15:51.690 fused_ordering(781) 00:15:51.690 fused_ordering(782) 00:15:51.690 fused_ordering(783) 00:15:51.690 fused_ordering(784) 00:15:51.690 fused_ordering(785) 00:15:51.690 fused_ordering(786) 00:15:51.690 fused_ordering(787) 00:15:51.690 fused_ordering(788) 00:15:51.690 fused_ordering(789) 00:15:51.690 fused_ordering(790) 00:15:51.690 fused_ordering(791) 00:15:51.690 fused_ordering(792) 00:15:51.690 fused_ordering(793) 00:15:51.690 fused_ordering(794) 00:15:51.690 fused_ordering(795) 00:15:51.690 fused_ordering(796) 00:15:51.690 fused_ordering(797) 00:15:51.690 fused_ordering(798) 00:15:51.690 fused_ordering(799) 00:15:51.690 fused_ordering(800) 00:15:51.690 fused_ordering(801) 00:15:51.690 fused_ordering(802) 00:15:51.690 fused_ordering(803) 00:15:51.690 fused_ordering(804) 00:15:51.690 fused_ordering(805) 00:15:51.690 fused_ordering(806) 00:15:51.690 fused_ordering(807) 00:15:51.690 fused_ordering(808) 00:15:51.690 fused_ordering(809) 00:15:51.690 fused_ordering(810) 00:15:51.690 fused_ordering(811) 00:15:51.690 fused_ordering(812) 00:15:51.690 fused_ordering(813) 00:15:51.690 fused_ordering(814) 00:15:51.690 fused_ordering(815) 00:15:51.690 fused_ordering(816) 00:15:51.691 fused_ordering(817) 00:15:51.691 fused_ordering(818) 00:15:51.691 fused_ordering(819) 00:15:51.691 fused_ordering(820) 00:15:52.280 fused_ordering(821) 00:15:52.280 fused_ordering(822) 00:15:52.280 fused_ordering(823) 00:15:52.280 fused_ordering(824) 00:15:52.280 fused_ordering(825) 00:15:52.280 fused_ordering(826) 00:15:52.280 fused_ordering(827) 00:15:52.280 fused_ordering(828) 00:15:52.280 fused_ordering(829) 00:15:52.280 fused_ordering(830) 00:15:52.280 fused_ordering(831) 00:15:52.280 fused_ordering(832) 00:15:52.280 fused_ordering(833) 00:15:52.280 fused_ordering(834) 00:15:52.280 fused_ordering(835) 00:15:52.280 fused_ordering(836) 00:15:52.280 fused_ordering(837) 00:15:52.280 fused_ordering(838) 00:15:52.280 fused_ordering(839) 00:15:52.280 fused_ordering(840) 00:15:52.280 fused_ordering(841) 00:15:52.280 fused_ordering(842) 00:15:52.280 fused_ordering(843) 00:15:52.280 fused_ordering(844) 00:15:52.280 fused_ordering(845) 00:15:52.280 fused_ordering(846) 00:15:52.280 fused_ordering(847) 00:15:52.280 fused_ordering(848) 00:15:52.280 fused_ordering(849) 
00:15:52.280 fused_ordering(850) 00:15:52.280 fused_ordering(851) 00:15:52.280 fused_ordering(852) 00:15:52.280 fused_ordering(853) 00:15:52.280 fused_ordering(854) 00:15:52.280 fused_ordering(855) 00:15:52.280 fused_ordering(856) 00:15:52.280 fused_ordering(857) 00:15:52.280 fused_ordering(858) 00:15:52.280 fused_ordering(859) 00:15:52.280 fused_ordering(860) 00:15:52.280 fused_ordering(861) 00:15:52.280 fused_ordering(862) 00:15:52.280 fused_ordering(863) 00:15:52.280 fused_ordering(864) 00:15:52.280 fused_ordering(865) 00:15:52.280 fused_ordering(866) 00:15:52.280 fused_ordering(867) 00:15:52.280 fused_ordering(868) 00:15:52.280 fused_ordering(869) 00:15:52.280 fused_ordering(870) 00:15:52.280 fused_ordering(871) 00:15:52.280 fused_ordering(872) 00:15:52.280 fused_ordering(873) 00:15:52.280 fused_ordering(874) 00:15:52.280 fused_ordering(875) 00:15:52.280 fused_ordering(876) 00:15:52.280 fused_ordering(877) 00:15:52.280 fused_ordering(878) 00:15:52.280 fused_ordering(879) 00:15:52.280 fused_ordering(880) 00:15:52.280 fused_ordering(881) 00:15:52.280 fused_ordering(882) 00:15:52.280 fused_ordering(883) 00:15:52.280 fused_ordering(884) 00:15:52.280 fused_ordering(885) 00:15:52.280 fused_ordering(886) 00:15:52.280 fused_ordering(887) 00:15:52.280 fused_ordering(888) 00:15:52.280 fused_ordering(889) 00:15:52.280 fused_ordering(890) 00:15:52.280 fused_ordering(891) 00:15:52.280 fused_ordering(892) 00:15:52.280 fused_ordering(893) 00:15:52.280 fused_ordering(894) 00:15:52.280 fused_ordering(895) 00:15:52.280 fused_ordering(896) 00:15:52.280 fused_ordering(897) 00:15:52.280 fused_ordering(898) 00:15:52.280 fused_ordering(899) 00:15:52.280 fused_ordering(900) 00:15:52.280 fused_ordering(901) 00:15:52.280 fused_ordering(902) 00:15:52.280 fused_ordering(903) 00:15:52.280 fused_ordering(904) 00:15:52.280 fused_ordering(905) 00:15:52.280 fused_ordering(906) 00:15:52.280 fused_ordering(907) 00:15:52.280 fused_ordering(908) 00:15:52.280 fused_ordering(909) 00:15:52.280 fused_ordering(910) 00:15:52.280 fused_ordering(911) 00:15:52.280 fused_ordering(912) 00:15:52.280 fused_ordering(913) 00:15:52.280 fused_ordering(914) 00:15:52.280 fused_ordering(915) 00:15:52.280 fused_ordering(916) 00:15:52.280 fused_ordering(917) 00:15:52.280 fused_ordering(918) 00:15:52.280 fused_ordering(919) 00:15:52.280 fused_ordering(920) 00:15:52.280 fused_ordering(921) 00:15:52.280 fused_ordering(922) 00:15:52.280 fused_ordering(923) 00:15:52.280 fused_ordering(924) 00:15:52.280 fused_ordering(925) 00:15:52.280 fused_ordering(926) 00:15:52.280 fused_ordering(927) 00:15:52.280 fused_ordering(928) 00:15:52.280 fused_ordering(929) 00:15:52.280 fused_ordering(930) 00:15:52.280 fused_ordering(931) 00:15:52.280 fused_ordering(932) 00:15:52.280 fused_ordering(933) 00:15:52.280 fused_ordering(934) 00:15:52.280 fused_ordering(935) 00:15:52.280 fused_ordering(936) 00:15:52.280 fused_ordering(937) 00:15:52.280 fused_ordering(938) 00:15:52.280 fused_ordering(939) 00:15:52.280 fused_ordering(940) 00:15:52.280 fused_ordering(941) 00:15:52.280 fused_ordering(942) 00:15:52.280 fused_ordering(943) 00:15:52.280 fused_ordering(944) 00:15:52.280 fused_ordering(945) 00:15:52.280 fused_ordering(946) 00:15:52.280 fused_ordering(947) 00:15:52.280 fused_ordering(948) 00:15:52.280 fused_ordering(949) 00:15:52.280 fused_ordering(950) 00:15:52.280 fused_ordering(951) 00:15:52.280 fused_ordering(952) 00:15:52.280 fused_ordering(953) 00:15:52.280 fused_ordering(954) 00:15:52.280 fused_ordering(955) 00:15:52.280 fused_ordering(956) 00:15:52.280 
fused_ordering(957) 00:15:52.280 fused_ordering(958) 00:15:52.280 fused_ordering(959) 00:15:52.280 fused_ordering(960) 00:15:52.280 fused_ordering(961) 00:15:52.280 fused_ordering(962) 00:15:52.280 fused_ordering(963) 00:15:52.280 fused_ordering(964) 00:15:52.280 fused_ordering(965) 00:15:52.280 fused_ordering(966) 00:15:52.280 fused_ordering(967) 00:15:52.280 fused_ordering(968) 00:15:52.280 fused_ordering(969) 00:15:52.280 fused_ordering(970) 00:15:52.280 fused_ordering(971) 00:15:52.280 fused_ordering(972) 00:15:52.280 fused_ordering(973) 00:15:52.280 fused_ordering(974) 00:15:52.280 fused_ordering(975) 00:15:52.280 fused_ordering(976) 00:15:52.280 fused_ordering(977) 00:15:52.280 fused_ordering(978) 00:15:52.280 fused_ordering(979) 00:15:52.280 fused_ordering(980) 00:15:52.280 fused_ordering(981) 00:15:52.280 fused_ordering(982) 00:15:52.280 fused_ordering(983) 00:15:52.280 fused_ordering(984) 00:15:52.280 fused_ordering(985) 00:15:52.280 fused_ordering(986) 00:15:52.280 fused_ordering(987) 00:15:52.280 fused_ordering(988) 00:15:52.280 fused_ordering(989) 00:15:52.280 fused_ordering(990) 00:15:52.280 fused_ordering(991) 00:15:52.280 fused_ordering(992) 00:15:52.280 fused_ordering(993) 00:15:52.280 fused_ordering(994) 00:15:52.280 fused_ordering(995) 00:15:52.280 fused_ordering(996) 00:15:52.280 fused_ordering(997) 00:15:52.280 fused_ordering(998) 00:15:52.280 fused_ordering(999) 00:15:52.280 fused_ordering(1000) 00:15:52.280 fused_ordering(1001) 00:15:52.280 fused_ordering(1002) 00:15:52.280 fused_ordering(1003) 00:15:52.280 fused_ordering(1004) 00:15:52.280 fused_ordering(1005) 00:15:52.280 fused_ordering(1006) 00:15:52.280 fused_ordering(1007) 00:15:52.280 fused_ordering(1008) 00:15:52.280 fused_ordering(1009) 00:15:52.280 fused_ordering(1010) 00:15:52.280 fused_ordering(1011) 00:15:52.280 fused_ordering(1012) 00:15:52.280 fused_ordering(1013) 00:15:52.280 fused_ordering(1014) 00:15:52.280 fused_ordering(1015) 00:15:52.280 fused_ordering(1016) 00:15:52.280 fused_ordering(1017) 00:15:52.281 fused_ordering(1018) 00:15:52.281 fused_ordering(1019) 00:15:52.281 fused_ordering(1020) 00:15:52.281 fused_ordering(1021) 00:15:52.281 fused_ordering(1022) 00:15:52.281 fused_ordering(1023) 00:15:52.281 20:07:44 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:52.281 20:07:44 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:52.281 20:07:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:52.281 20:07:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:15:52.281 20:07:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:52.281 20:07:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:15:52.281 20:07:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:52.281 20:07:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:52.281 rmmod nvme_tcp 00:15:52.281 rmmod nvme_fabrics 00:15:52.281 rmmod nvme_keyring 00:15:52.281 20:07:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:52.281 20:07:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:15:52.281 20:07:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:15:52.281 20:07:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 4150779 ']' 00:15:52.281 20:07:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 4150779 
00:15:52.281 20:07:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 4150779 ']' 00:15:52.281 20:07:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 4150779 00:15:52.281 20:07:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:15:52.281 20:07:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:52.281 20:07:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4150779 00:15:52.281 20:07:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:52.281 20:07:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:52.281 20:07:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4150779' 00:15:52.281 killing process with pid 4150779 00:15:52.281 20:07:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 4150779 00:15:52.281 [2024-05-15 20:07:44.623811] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:52.281 20:07:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 4150779 00:15:52.281 20:07:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:52.281 20:07:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:52.281 20:07:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:52.281 20:07:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:52.281 20:07:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:52.281 20:07:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.281 20:07:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:52.281 20:07:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.827 20:07:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:54.827 00:15:54.827 real 0m14.202s 00:15:54.827 user 0m7.355s 00:15:54.827 sys 0m7.796s 00:15:54.827 20:07:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:54.827 20:07:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:54.827 ************************************ 00:15:54.827 END TEST nvmf_fused_ordering 00:15:54.827 ************************************ 00:15:54.827 20:07:46 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:54.827 20:07:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:54.827 20:07:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:54.827 20:07:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:54.827 ************************************ 00:15:54.827 START TEST nvmf_delete_subsystem 00:15:54.827 ************************************ 00:15:54.827 20:07:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 
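The nvmftestfini teardown traced above clears the exit trap, unloads the kernel initiator modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), kills the nvmf_tgt application (pid 4150779), removes the SPDK network namespace and flushes the initiator-side interface. A compressed sketch of that sequence, using the PID and interface names from this run; _remove_spdk_ns is xtrace-disabled above, so the namespace deletion shown here is an assumption rather than a traced command:

  # Sketch of the nvmftestfini cleanup above; 4150779, cvl_0_0_ns_spdk and cvl_0_1 come from this log.
  trap - SIGINT SIGTERM EXIT
  sync
  modprobe -v -r nvme-tcp                      # also pulls out nvme_fabrics/nvme_keyring, as logged
  modprobe -v -r nvme-fabrics
  kill 4150779; wait 4150779 2>/dev/null       # killprocess: stop the nvmf_tgt reactors
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed effect of _remove_spdk_ns
  ip -4 addr flush cvl_0_1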
00:15:54.827 * Looking for test storage... 00:15:54.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:54.827 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:54.827 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:15:54.827 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:54.827 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:15:54.828 20:07:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:02.973 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:02.973 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:02.973 Found net devices under 0000:31:00.0: cvl_0_0 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:02.973 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:02.974 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:02.974 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:02.974 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:02.974 Found net devices under 0000:31:00.1: cvl_0_1 00:16:02.974 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:02.974 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:02.974 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:16:02.974 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:02.974 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:02.974 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:02.974 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:02.974 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:02.974 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:02.974 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:02.974 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:02.974 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:02.974 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:02.974 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:02.974 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:02.974 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:02.974 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:02.974 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:02.974 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:02.974 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:02.974 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:02.974 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:02.974 20:07:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:02.974 20:07:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:02.974 20:07:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:02.974 20:07:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:02.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:02.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:16:02.974 00:16:02.974 --- 10.0.0.2 ping statistics --- 00:16:02.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.974 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:16:02.974 20:07:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:02.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:02.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.373 ms 00:16:02.974 00:16:02.974 --- 10.0.0.1 ping statistics --- 00:16:02.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.974 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:16:02.974 20:07:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:02.974 20:07:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:16:02.974 20:07:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:02.974 20:07:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:02.974 20:07:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:02.974 20:07:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:02.974 20:07:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:02.974 20:07:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:02.974 20:07:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:02.974 20:07:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:16:02.974 20:07:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:02.974 20:07:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:02.974 20:07:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:02.974 20:07:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:02.974 20:07:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=4156264 00:16:02.974 20:07:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 4156264 00:16:02.974 20:07:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 4156264 ']' 00:16:02.974 20:07:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.974 20:07:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:02.974 20:07:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
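The nvmf_tcp_init sequence above splits the two ice ports between namespaces: cvl_0_0 moves into cvl_0_0_ns_spdk and carries the target address 10.0.0.2, while cvl_0_1 stays in the default namespace as the initiator side on 10.0.0.1, with an iptables rule admitting NVMe/TCP on port 4420 and a ping in each direction as a sanity check. The same steps written out as a plain command sequence (all names, addresses and the port rule are the ones traced above):

  # Target/initiator split used by this run (physical NICs exposed as cvl_0_0 and cvl_0_1).
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator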
00:16:02.974 20:07:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:02.974 20:07:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:02.974 [2024-05-15 20:07:55.235082] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:16:02.974 [2024-05-15 20:07:55.235144] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:02.974 EAL: No free 2048 kB hugepages reported on node 1 00:16:02.974 [2024-05-15 20:07:55.316466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:02.974 [2024-05-15 20:07:55.412329] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:02.974 [2024-05-15 20:07:55.412391] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:02.974 [2024-05-15 20:07:55.412399] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:02.974 [2024-05-15 20:07:55.412406] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:02.974 [2024-05-15 20:07:55.412412] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:02.974 [2024-05-15 20:07:55.412545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:02.974 [2024-05-15 20:07:55.412706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.917 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:03.917 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:16:03.917 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:03.917 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:03.917 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:03.917 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:03.917 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:03.917 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.917 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:03.917 [2024-05-15 20:07:56.163064] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:03.917 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.917 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:03.917 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.917 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:03.917 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.917 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:03.917 20:07:56 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.917 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:03.917 [2024-05-15 20:07:56.187057] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:03.917 [2024-05-15 20:07:56.187242] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:03.917 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.917 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:03.917 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.917 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:03.917 NULL1 00:16:03.917 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.917 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:03.917 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.917 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:03.917 Delay0 00:16:03.917 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.917 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:03.917 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.917 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:03.917 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.917 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=4156512 00:16:03.917 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:16:03.917 20:07:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:03.917 EAL: No free 2048 kB hugepages reported on node 1 00:16:03.917 [2024-05-15 20:07:56.283925] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
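With networking up, the target is configured entirely over JSON-RPC: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev (Delay0, all four latency parameters set to 1000000 us, roughly one second per I/O) as its namespace, so a full queue of slow I/O is still outstanding when the subsystem is deleted two seconds into the perf run. rpc_cmd above is the autotest wrapper around scripts/rpc.py; the same sequence issued directly would look like the sketch below (arguments copied from the trace, rpc.py talking to the default /var/tmp/spdk.sock socket):

  # RPC configuration and the perf load from this run.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $SPDK/scripts/rpc.py bdev_null_create NULL1 1000 512                 # 1000 MB null bdev, 512 B blocks
  $SPDK/scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $SPDK/build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &                        # perf_pid=4156512 in this run
  sleep 2                                                              # let I/O queue up before the delete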
00:16:05.828 20:07:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:06.089 Read completed with error (sct=0, sc=8), Write completed with error (sct=0, sc=8) and starting I/O failed: -6 reported repeatedly by spdk_nvme_perf for the I/O still outstanding against the deleted subsystem
00:16:06.089 [2024-05-15 20:07:58.367226] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f00c90 is same with the state(5) to be set
00:16:06.090 [2024-05-15 20:07:58.368409] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f15c90 is same with the state(5) to be set
00:16:07.032 [2024-05-15 20:07:59.341519] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1e250 is same with the state(5) to be set
00:16:07.032 [2024-05-15 20:07:59.370500] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1f290 is same with the state(5) to be set
00:16:07.032 [2024-05-15 20:07:59.370967] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eff8b0 is same with the state(5) to be set
00:16:07.032 [2024-05-15 20:07:59.374257] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe09800c780 is same with the state(5) to be set
00:16:07.032 [2024-05-15 20:07:59.374452] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe09800bfe0 is same with the state(5) to be set
00:16:07.032 Initializing NVMe Controllers
00:16:07.032 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:16:07.032 Controller IO queue size 128, less than required.
00:16:07.032 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:16:07.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:16:07.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:16:07.032 Initialization complete. Launching workers.
00:16:07.032 ======================================================== 00:16:07.032 Latency(us) 00:16:07.032 Device Information : IOPS MiB/s Average min max 00:16:07.032 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 162.42 0.08 911084.55 451.30 1006547.45 00:16:07.032 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 179.85 0.09 920701.88 390.06 1009584.75 00:16:07.032 ======================================================== 00:16:07.032 Total : 342.27 0.17 916138.20 390.06 1009584.75 00:16:07.032 00:16:07.032 [2024-05-15 20:07:59.375015] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f1e250 (9): Bad file descriptor 00:16:07.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:16:07.032 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.032 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:16:07.032 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4156512 00:16:07.032 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:16:07.604 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:16:07.604 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4156512 00:16:07.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (4156512) - No such process 00:16:07.604 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 4156512 00:16:07.604 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:16:07.604 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 4156512 00:16:07.604 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:16:07.604 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:07.604 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:16:07.604 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:07.604 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 4156512 00:16:07.604 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:16:07.604 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:07.604 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:07.604 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:07.604 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:07.604 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.604 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:07.604 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.604 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
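The "kill: (4156512) - No such process" message above is the assertion the first pass is after: once nvmf_delete_subsystem has pulled the subsystem out from under the running perf, the perf process must already have exited, so kill -0 and wait on its PID are expected to fail, and the NOT helper from autotest_common.sh turns that expected failure into a pass (es=1). A stand-in for that pattern under an assumed name (expect_failure is illustrative, not the real helper):

  # Hypothetical re-implementation of the NOT()/negative-assertion pattern used above.
  expect_failure() {
      if "$@"; then
          echo "expected '$*' to fail" >&2
          return 1              # command unexpectedly succeeded -> test failure
      fi
      return 0                  # command failed as expected -> test passes
  }

  expect_failure kill -0 4156512    # first perf PID from this run; the process is already gone
  expect_failure wait 4156512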
00:16:07.604 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.604 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:07.604 [2024-05-15 20:07:59.903922] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:07.604 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.604 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:07.604 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.604 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:07.604 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.604 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=4157186 00:16:07.604 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:16:07.604 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:07.604 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4157186 00:16:07.604 20:07:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:07.604 EAL: No free 2048 kB hugepages reported on node 1 00:16:07.604 [2024-05-15 20:07:59.974540] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
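The second spdk_nvme_perf pass is then launched in the background with the flags shown in the trace, and the script drops into the kill -0 / sleep 0.5 poll that produces the repeating lines below. A condensed sketch of that pattern, not the literal delete_subsystem.sh code (its exact loop structure and timeout handling may differ):

    PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf

    # 3-second 70/30 random read/write run, 512-byte I/O at queue depth 128,
    # on cores 2 and 3 (-c 0xC), against the TCP listener created above.
    # -P 4 is passed through exactly as it appears in the trace.
    $PERF -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
          -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

    delay=0
    # kill -0 only tests that the PID still exists; loop until perf goes away.
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && break   # give up after roughly 10 seconds
        sleep 0.5
    done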
00:16:08.174 20:08:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:08.174 20:08:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4157186 00:16:08.174 20:08:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:08.434 20:08:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:08.434 20:08:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4157186 00:16:08.434 20:08:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:09.005 20:08:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:09.005 20:08:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4157186 00:16:09.005 20:08:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:09.575 20:08:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:09.575 20:08:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4157186 00:16:09.575 20:08:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:10.146 20:08:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:10.146 20:08:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4157186 00:16:10.146 20:08:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:10.715 20:08:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:10.715 20:08:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4157186 00:16:10.715 20:08:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:10.715 Initializing NVMe Controllers 00:16:10.715 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:10.715 Controller IO queue size 128, less than required. 00:16:10.715 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:10.715 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:16:10.715 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:16:10.715 Initialization complete. Launching workers. 
00:16:10.715 ======================================================== 00:16:10.715 Latency(us) 00:16:10.715 Device Information : IOPS MiB/s Average min max 00:16:10.715 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002278.28 1000317.37 1005716.85 00:16:10.715 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003979.09 1000413.60 1009634.97 00:16:10.715 ======================================================== 00:16:10.715 Total : 256.00 0.12 1003128.68 1000317.37 1009634.97 00:16:10.715 00:16:10.715 [2024-05-15 20:08:03.070210] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5860 is same with the state(5) to be set 00:16:10.977 20:08:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:10.977 20:08:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4157186 00:16:10.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (4157186) - No such process 00:16:10.977 20:08:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 4157186 00:16:10.977 20:08:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:16:10.977 20:08:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:16:10.977 20:08:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:10.977 20:08:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:16:10.977 20:08:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:10.977 20:08:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:16:10.977 20:08:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:10.977 20:08:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:10.977 rmmod nvme_tcp 00:16:11.238 rmmod nvme_fabrics 00:16:11.238 rmmod nvme_keyring 00:16:11.238 20:08:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:11.238 20:08:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:16:11.238 20:08:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:16:11.238 20:08:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 4156264 ']' 00:16:11.238 20:08:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 4156264 00:16:11.238 20:08:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 4156264 ']' 00:16:11.239 20:08:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 4156264 00:16:11.239 20:08:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:16:11.239 20:08:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:11.239 20:08:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4156264 00:16:11.239 20:08:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:11.239 20:08:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:11.239 20:08:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4156264' 00:16:11.239 killing process with pid 4156264 00:16:11.239 20:08:03 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 4156264 00:16:11.239 [2024-05-15 20:08:03.584169] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:11.239 20:08:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 4156264 00:16:11.239 20:08:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:11.239 20:08:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:11.239 20:08:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:11.239 20:08:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:11.239 20:08:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:11.239 20:08:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.239 20:08:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:11.239 20:08:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.785 20:08:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:13.785 00:16:13.785 real 0m18.875s 00:16:13.785 user 0m31.043s 00:16:13.785 sys 0m6.875s 00:16:13.785 20:08:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:13.785 20:08:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:13.785 ************************************ 00:16:13.785 END TEST nvmf_delete_subsystem 00:16:13.785 ************************************ 00:16:13.785 20:08:05 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:16:13.785 20:08:05 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:13.785 20:08:05 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:13.785 20:08:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:13.785 ************************************ 00:16:13.785 START TEST nvmf_ns_masking 00:16:13.785 ************************************ 00:16:13.785 20:08:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:16:13.785 * Looking for test storage... 
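The block above is the standard nvmftestfini teardown that closes every target test in this log: the kernel NVMe-oF initiator modules are unloaded, the nvmf_tgt reactor is killed by PID, and the test networking is torn down before the real/user/sys summary is printed. Roughly, and not the literal nvmf/common.sh code (the namespace removal step is an assumption about what remove_spdk_ns does):

    sync
    modprobe -v -r nvme-tcp        # also drops nvme_fabrics / nvme_keyring, as logged above
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"            # nvmfpid: the nvmf_tgt (reactor_0) process
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed equivalent of remove_spdk_ns
    ip -4 addr flush cvl_0_1                      # clear the initiator-side interface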
00:16:13.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:13.785 20:08:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:13.785 20:08:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:16:13.785 20:08:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:13.785 20:08:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:13.785 20:08:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:13.785 20:08:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:13.785 20:08:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:13.785 20:08:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:13.785 20:08:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:13.785 20:08:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:13.785 20:08:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:13.785 20:08:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:13.785 20:08:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:13.785 20:08:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:13.785 20:08:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:13.785 20:08:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:13.785 20:08:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:13.785 20:08:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:13.785 20:08:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:13.785 20:08:05 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:13.785 20:08:05 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:13.785 20:08:06 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:13.785 20:08:06 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.785 20:08:06 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.785 20:08:06 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.785 20:08:06 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:16:13.785 20:08:06 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.785 20:08:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:16:13.785 20:08:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:13.785 20:08:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:13.785 20:08:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:13.785 20:08:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:13.785 20:08:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:13.785 20:08:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:13.785 20:08:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:13.785 20:08:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:13.785 20:08:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:13.785 20:08:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:16:13.786 20:08:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:16:13.786 20:08:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:16:13.786 20:08:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:16:13.786 20:08:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=fb67ce5e-5f8d-46b5-9257-35bfcc4d474c 00:16:13.786 20:08:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:16:13.786 20:08:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:13.786 20:08:06 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:13.786 20:08:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:13.786 20:08:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:13.786 20:08:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:13.786 20:08:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.786 20:08:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:13.786 20:08:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.786 20:08:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:13.786 20:08:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:13.786 20:08:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:16:13.786 20:08:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:21.936 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:21.936 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:21.936 Found net devices under 0000:31:00.0: cvl_0_0 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
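The device-discovery trace above reduces to a sysfs walk: for each supported NIC PCI function (here the two ports of an Intel E810, 8086:159b), the bound kernel netdev is read from /sys/bus/pci/devices/<bdf>/net/. A standalone sketch of that lookup, with the PCI addresses taken from this run (they will differ on other hosts; the script additionally checks that the link is up):

    for bdf in 0000:31:00.0 0000:31:00.1; do
        for path in /sys/bus/pci/devices/"$bdf"/net/*; do
            # Each entry under net/ is a netdev bound to this PCI function.
            [ -e "$path" ] && echo "Found net devices under $bdf: $(basename "$path")"
        done
    done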
00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:21.936 Found net devices under 0000:31:00.1: cvl_0_1 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:21.936 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:21.937 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:21.937 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:21.937 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:21.937 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:21.937 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:21.937 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:21.937 20:08:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:21.937 20:08:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:21.937 20:08:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:21.937 20:08:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:16:21.937 20:08:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:21.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:21.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms 00:16:21.937 00:16:21.937 --- 10.0.0.2 ping statistics --- 00:16:21.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.937 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms 00:16:21.937 20:08:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:21.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:21.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.351 ms 00:16:21.937 00:16:21.937 --- 10.0.0.1 ping statistics --- 00:16:21.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.937 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:16:21.937 20:08:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:21.937 20:08:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:16:21.937 20:08:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:21.937 20:08:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:21.937 20:08:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:21.937 20:08:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:21.937 20:08:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:21.937 20:08:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:21.937 20:08:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:21.937 20:08:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:16:21.937 20:08:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:21.937 20:08:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:21.937 20:08:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:21.937 20:08:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=4162543 00:16:21.937 20:08:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 4162543 00:16:21.937 20:08:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:21.937 20:08:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 4162543 ']' 00:16:21.937 20:08:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.937 20:08:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:21.937 20:08:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.937 20:08:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:21.937 20:08:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:21.937 [2024-05-15 20:08:14.241310] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
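For the "phy" flavour of this job, nvmf_tcp_init wires the two ports of the same card back-to-back by hiding the target port in its own network namespace; the ping output above is the connectivity check before nvmf_tgt is started inside that namespace. Condensed, and slightly reordered, from the ip/iptables/ping calls in the trace:

    # Target port: move into a private namespace and give it the target address.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Initiator port: stays in the root namespace with the client address.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Connectivity check in both directions; the target is then launched as
    # "ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF".
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1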
00:16:21.937 [2024-05-15 20:08:14.241367] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:21.937 EAL: No free 2048 kB hugepages reported on node 1 00:16:21.937 [2024-05-15 20:08:14.333271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:21.937 [2024-05-15 20:08:14.407949] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:21.937 [2024-05-15 20:08:14.408000] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:21.937 [2024-05-15 20:08:14.408009] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:21.937 [2024-05-15 20:08:14.408015] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:21.937 [2024-05-15 20:08:14.408021] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:21.937 [2024-05-15 20:08:14.408137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:21.937 [2024-05-15 20:08:14.408282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:21.937 [2024-05-15 20:08:14.408434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:21.937 [2024-05-15 20:08:14.408435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.880 20:08:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:22.880 20:08:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:16:22.880 20:08:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:22.880 20:08:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:22.880 20:08:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:22.880 20:08:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:22.880 20:08:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:22.880 [2024-05-15 20:08:15.344812] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:22.880 20:08:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:16:22.880 20:08:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:16:22.880 20:08:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:23.141 Malloc1 00:16:23.141 20:08:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:23.401 Malloc2 00:16:23.401 20:08:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:23.662 20:08:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:16:23.923 20:08:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:24.184 [2024-05-15 20:08:16.425662] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:24.184 [2024-05-15 20:08:16.425928] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:24.184 20:08:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:16:24.184 20:08:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fb67ce5e-5f8d-46b5-9257-35bfcc4d474c -a 10.0.0.2 -s 4420 -i 4 00:16:24.184 20:08:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:16:24.184 20:08:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:16:24.184 20:08:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:16:24.184 20:08:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:16:24.184 20:08:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:16:26.731 20:08:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:16:26.731 20:08:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:16:26.731 20:08:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:16:26.731 20:08:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:16:26.731 20:08:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:16:26.731 20:08:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:16:26.731 20:08:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:16:26.731 20:08:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:26.731 20:08:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:16:26.731 20:08:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:16:26.731 20:08:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:16:26.731 20:08:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:26.731 20:08:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:26.731 [ 0]:0x1 00:16:26.731 20:08:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:26.731 20:08:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:26.731 20:08:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=7c4e13459d5d44d0bb28f6711df0297b 00:16:26.731 20:08:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 7c4e13459d5d44d0bb28f6711df0297b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:26.731 20:08:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:16:26.731 20:08:19 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:16:26.731 20:08:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:26.731 20:08:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:26.731 [ 0]:0x1 00:16:26.731 20:08:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:26.731 20:08:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:26.731 20:08:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=7c4e13459d5d44d0bb28f6711df0297b 00:16:26.731 20:08:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 7c4e13459d5d44d0bb28f6711df0297b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:26.731 20:08:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:16:26.731 20:08:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:26.731 20:08:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:16:26.731 [ 1]:0x2 00:16:26.731 20:08:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:26.731 20:08:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:26.731 20:08:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c6a677f1d95e478fa02a8e808383a591 00:16:26.732 20:08:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c6a677f1d95e478fa02a8e808383a591 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:26.732 20:08:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:16:26.732 20:08:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:26.992 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:26.992 20:08:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:26.992 20:08:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:27.253 20:08:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:16:27.253 20:08:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fb67ce5e-5f8d-46b5-9257-35bfcc4d474c -a 10.0.0.2 -s 4420 -i 4 00:16:27.514 20:08:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:27.514 20:08:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:16:27.514 20:08:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:16:27.514 20:08:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:16:27.514 20:08:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:16:27.514 20:08:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:16:29.507 20:08:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:16:29.507 20:08:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:16:29.507 20:08:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # 
grep -c SPDKISFASTANDAWESOME 00:16:29.507 20:08:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:16:29.507 20:08:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:16:29.507 20:08:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:16:29.507 20:08:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:16:29.507 20:08:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:29.507 20:08:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:16:29.507 20:08:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:16:29.507 20:08:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:16:29.508 20:08:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:29.508 20:08:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:16:29.508 20:08:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:16:29.508 20:08:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:29.508 20:08:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:16:29.508 20:08:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:29.508 20:08:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:16:29.508 20:08:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:29.508 20:08:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:29.769 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:29.769 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:29.769 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:16:29.769 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:29.769 20:08:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:29.769 20:08:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:29.769 20:08:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:29.769 20:08:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:29.769 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:16:29.769 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:29.769 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:16:29.769 [ 0]:0x2 00:16:29.769 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:29.769 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:29.769 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c6a677f1d95e478fa02a8e808383a591 00:16:29.769 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c6a677f1d95e478fa02a8e808383a591 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:29.769 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:30.030 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:16:30.030 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:30.030 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:30.030 [ 0]:0x1 00:16:30.030 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:30.030 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:30.030 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=7c4e13459d5d44d0bb28f6711df0297b 00:16:30.030 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 7c4e13459d5d44d0bb28f6711df0297b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:30.030 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:16:30.030 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:30.030 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:16:30.030 [ 1]:0x2 00:16:30.030 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:30.030 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:30.030 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c6a677f1d95e478fa02a8e808383a591 00:16:30.030 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c6a677f1d95e478fa02a8e808383a591 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:30.030 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:30.291 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:16:30.291 20:08:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:30.291 20:08:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:16:30.291 20:08:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:16:30.291 20:08:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:30.291 20:08:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:16:30.291 20:08:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:30.291 20:08:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:16:30.291 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:30.291 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:30.291 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:30.291 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:30.291 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:16:30.291 20:08:22 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:30.291 20:08:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:30.291 20:08:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:30.291 20:08:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:30.291 20:08:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:30.291 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:16:30.291 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:30.291 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:16:30.291 [ 0]:0x2 00:16:30.291 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:30.291 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:30.552 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c6a677f1d95e478fa02a8e808383a591 00:16:30.552 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c6a677f1d95e478fa02a8e808383a591 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:30.552 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:16:30.552 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:30.552 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:30.552 20:08:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:30.813 20:08:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:16:30.813 20:08:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I fb67ce5e-5f8d-46b5-9257-35bfcc4d474c -a 10.0.0.2 -s 4420 -i 4 00:16:30.813 20:08:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:30.813 20:08:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:16:30.813 20:08:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:16:30.813 20:08:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:16:30.813 20:08:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:16:30.813 20:08:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:16:33.359 20:08:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:16:33.359 20:08:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:16:33.359 20:08:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:16:33.359 20:08:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:16:33.359 20:08:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:16:33.359 20:08:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:16:33.359 20:08:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme list-subsys -o json 00:16:33.359 20:08:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:33.359 20:08:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:16:33.359 20:08:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:16:33.359 20:08:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:16:33.359 20:08:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:33.359 20:08:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:33.359 [ 0]:0x1 00:16:33.359 20:08:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:33.359 20:08:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:33.359 20:08:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=7c4e13459d5d44d0bb28f6711df0297b 00:16:33.359 20:08:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 7c4e13459d5d44d0bb28f6711df0297b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:33.359 20:08:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:16:33.359 20:08:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:33.359 20:08:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:16:33.359 [ 1]:0x2 00:16:33.359 20:08:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:33.359 20:08:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:33.359 20:08:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c6a677f1d95e478fa02a8e808383a591 00:16:33.359 20:08:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c6a677f1d95e478fa02a8e808383a591 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:33.359 20:08:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:33.619 20:08:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:16:33.619 20:08:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:33.619 20:08:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:16:33.619 20:08:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:16:33.619 20:08:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:33.620 20:08:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:16:33.620 20:08:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:33.620 20:08:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:16:33.620 20:08:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:33.620 20:08:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:33.620 20:08:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:33.620 20:08:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:33.620 20:08:25 nvmf_tcp.nvmf_ns_masking 
-- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:16:33.620 20:08:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:33.620 20:08:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:33.620 20:08:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:33.620 20:08:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:33.620 20:08:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:33.620 20:08:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:16:33.620 20:08:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:33.620 20:08:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:16:33.620 [ 0]:0x2 00:16:33.620 20:08:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:33.620 20:08:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:33.620 20:08:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c6a677f1d95e478fa02a8e808383a591 00:16:33.620 20:08:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c6a677f1d95e478fa02a8e808383a591 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:33.620 20:08:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:33.620 20:08:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:33.620 20:08:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:33.620 20:08:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:33.620 20:08:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:33.620 20:08:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:33.620 20:08:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:33.620 20:08:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:33.620 20:08:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:33.620 20:08:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:33.620 20:08:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:33.620 20:08:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:33.881 [2024-05-15 20:08:26.177638] nvmf_rpc.c:1781:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:33.881 
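The checks above are the core of the ns_masking test: namespace 1 is attached with --no-auto-visible and then exposed to or hidden from a specific host NQN, while namespace 2 was added without that flag and stays auto-visible, which appears to be why the nvmf_ns_remove_host call against nsid 2 is rejected; the JSON-RPC request and "Invalid parameters" response for that rejected call follow below, and the NOT wrapper in the trace expects exactly that failure. A minimal sketch of the flow with the same rpc.py and nvme-cli calls used by the script:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Namespace 1: hidden until a host is explicitly allowed to see it.
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    $RPC nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    $RPC nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

    # Namespace 2: auto-visible, so per-host visibility changes are refused.
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2

    # Initiator-side visibility check, as done by ns_is_visible():
    nvme list-ns /dev/nvme0
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # all zeros => namespace not visible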
request: 00:16:33.881 { 00:16:33.881 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:33.881 "nsid": 2, 00:16:33.881 "host": "nqn.2016-06.io.spdk:host1", 00:16:33.881 "method": "nvmf_ns_remove_host", 00:16:33.881 "req_id": 1 00:16:33.881 } 00:16:33.881 Got JSON-RPC error response 00:16:33.881 response: 00:16:33.881 { 00:16:33.881 "code": -32602, 00:16:33.881 "message": "Invalid parameters" 00:16:33.881 } 00:16:33.881 20:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:33.881 20:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:33.881 20:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:33.881 20:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:33.881 20:08:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:16:33.881 20:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:33.881 20:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:16:33.881 20:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:16:33.881 20:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:33.881 20:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:16:33.881 20:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:33.881 20:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:16:33.881 20:08:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:33.881 20:08:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:33.881 20:08:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:33.881 20:08:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:33.881 20:08:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:16:33.881 20:08:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:33.881 20:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:33.881 20:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:33.881 20:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:33.881 20:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:33.881 20:08:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:16:33.881 20:08:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:33.881 20:08:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:16:33.881 [ 0]:0x2 00:16:33.881 20:08:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:33.881 20:08:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:33.881 20:08:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=c6a677f1d95e478fa02a8e808383a591 00:16:33.881 20:08:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ c6a677f1d95e478fa02a8e808383a591 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:33.881 20:08:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:16:33.881 20:08:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:33.881 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:33.881 20:08:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:34.150 20:08:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:16:34.150 20:08:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:16:34.150 20:08:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:34.150 20:08:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:16:34.150 20:08:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:34.150 20:08:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:16:34.150 20:08:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:34.150 20:08:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:34.150 rmmod nvme_tcp 00:16:34.150 rmmod nvme_fabrics 00:16:34.150 rmmod nvme_keyring 00:16:34.150 20:08:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:34.150 20:08:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:16:34.150 20:08:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:16:34.150 20:08:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 4162543 ']' 00:16:34.150 20:08:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 4162543 00:16:34.150 20:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 4162543 ']' 00:16:34.150 20:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 4162543 00:16:34.150 20:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:16:34.411 20:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:34.411 20:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4162543 00:16:34.411 20:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:34.411 20:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:34.411 20:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4162543' 00:16:34.411 killing process with pid 4162543 00:16:34.411 20:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 4162543 00:16:34.411 [2024-05-15 20:08:26.704993] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:34.411 20:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 4162543 00:16:34.411 20:08:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:34.411 20:08:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:34.411 20:08:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:34.411 20:08:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s 
]] 00:16:34.411 20:08:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:34.411 20:08:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.411 20:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:34.411 20:08:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.958 20:08:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:36.958 00:16:36.958 real 0m23.055s 00:16:36.958 user 0m55.294s 00:16:36.958 sys 0m7.663s 00:16:36.958 20:08:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:36.958 20:08:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:36.958 ************************************ 00:16:36.958 END TEST nvmf_ns_masking 00:16:36.958 ************************************ 00:16:36.958 20:08:28 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:16:36.958 20:08:28 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:36.958 20:08:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:36.958 20:08:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:36.958 20:08:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:36.958 ************************************ 00:16:36.958 START TEST nvmf_nvme_cli 00:16:36.958 ************************************ 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:36.958 * Looking for test storage... 
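The nvmf_ns_masking run that finishes above exercises one idea repeatedly: a namespace counts as visible to the connected host only if it shows up in nvme list-ns and its NGUID reported by nvme id-ns is non-zero, and visibility is toggled on the target side with the nvmf_ns_remove_host RPC. A condensed, standalone sketch of that check, assuming the same /dev/nvme0 controller, subsystem nqn.2016-06.io.spdk:cnode1 and host nqn.2016-06.io.spdk:host1 as in the run (check_ns is an illustrative name, not a helper from the test scripts, and rpc.py is addressed relative to an SPDK checkout):

    # Succeeds only while the namespace is exposed to this host:
    # it must be listed and must report a non-zero NGUID.
    check_ns() {
        local nsid=$1
        nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

    # Hide namespace 1 from host1 on the target, then re-check from the initiator.
    ./scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    check_ns 0x1 || echo "namespace 1 is no longer visible to this host"

In the trace, the NOT wrapper is the harness's way of asserting that such a call fails: the non-zero exit status is captured as es=1 and treated as the expected outcome.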
00:16:36.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:36.958 20:08:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:36.959 20:08:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:36.959 20:08:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.959 20:08:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:36.959 20:08:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.959 20:08:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:36.959 20:08:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:36.959 20:08:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:16:36.959 20:08:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:45.099 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:45.099 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:16:45.099 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:45.099 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:45.099 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:45.099 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:45.100 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:45.100 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:45.100 Found net devices under 0000:31:00.0: cvl_0_0 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:45.100 Found net devices under 0000:31:00.1: cvl_0_1 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:45.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:45.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.466 ms 00:16:45.100 00:16:45.100 --- 10.0.0.2 ping statistics --- 00:16:45.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.100 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:45.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:45.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:16:45.100 00:16:45.100 --- 10.0.0.1 ping statistics --- 00:16:45.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.100 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=4169861 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 4169861 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 4169861 ']' 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:45.100 20:08:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:45.100 [2024-05-15 20:08:37.490888] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:16:45.100 [2024-05-15 20:08:37.490939] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:45.101 EAL: No free 2048 kB hugepages reported on node 1 00:16:45.101 [2024-05-15 20:08:37.585084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:45.362 [2024-05-15 20:08:37.682344] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:45.362 [2024-05-15 20:08:37.682406] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
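The two pings above close out nvmf_tcp_init: one of the two ports discovered earlier (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace to play the target, the other (cvl_0_1) stays in the root namespace as the initiator, and 10.0.0.2/10.0.0.1 must be reachable in both directions before the target application is started inside the namespace. A condensed sketch of that topology setup, using the interface and namespace names from this run (error handling and the preliminary address flushes omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into its own netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

    # Kernel initiator driver, then the target itself inside the namespace
    # (binary path relative to an SPDK checkout):
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &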
00:16:45.362 [2024-05-15 20:08:37.682414] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:45.362 [2024-05-15 20:08:37.682421] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:45.362 [2024-05-15 20:08:37.682428] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:45.362 [2024-05-15 20:08:37.682564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:45.362 [2024-05-15 20:08:37.682693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:45.362 [2024-05-15 20:08:37.682861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.362 [2024-05-15 20:08:37.682862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:45.934 20:08:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:45.934 20:08:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:16:45.934 20:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:45.934 20:08:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:45.934 20:08:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:45.934 20:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:45.934 20:08:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:45.934 20:08:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.934 20:08:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:45.934 [2024-05-15 20:08:38.418083] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:45.934 20:08:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.934 20:08:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:45.934 20:08:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.934 20:08:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:46.196 Malloc0 00:16:46.196 20:08:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.196 20:08:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:46.196 20:08:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.196 20:08:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:46.196 Malloc1 00:16:46.196 20:08:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.196 20:08:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:46.196 20:08:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.196 20:08:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:46.196 20:08:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.196 20:08:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:46.196 20:08:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.196 20:08:38 
nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:46.196 20:08:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.196 20:08:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:46.196 20:08:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.196 20:08:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:46.196 20:08:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.196 20:08:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:46.196 20:08:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.196 20:08:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:46.196 [2024-05-15 20:08:38.507627] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:46.196 [2024-05-15 20:08:38.507869] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:46.196 20:08:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.196 20:08:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:46.196 20:08:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.196 20:08:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:46.196 20:08:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.196 20:08:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:16:46.196 00:16:46.196 Discovery Log Number of Records 2, Generation counter 2 00:16:46.196 =====Discovery Log Entry 0====== 00:16:46.196 trtype: tcp 00:16:46.196 adrfam: ipv4 00:16:46.196 subtype: current discovery subsystem 00:16:46.196 treq: not required 00:16:46.196 portid: 0 00:16:46.196 trsvcid: 4420 00:16:46.196 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:46.196 traddr: 10.0.0.2 00:16:46.196 eflags: explicit discovery connections, duplicate discovery information 00:16:46.196 sectype: none 00:16:46.196 =====Discovery Log Entry 1====== 00:16:46.196 trtype: tcp 00:16:46.196 adrfam: ipv4 00:16:46.196 subtype: nvme subsystem 00:16:46.196 treq: not required 00:16:46.196 portid: 0 00:16:46.196 trsvcid: 4420 00:16:46.196 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:46.196 traddr: 10.0.0.2 00:16:46.196 eflags: none 00:16:46.196 sectype: none 00:16:46.196 20:08:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:46.196 20:08:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:46.196 20:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:46.196 20:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:46.196 20:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:46.196 20:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:46.196 20:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
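The RPC calls and discovery output above amount to a complete NVMe-oF/TCP bring-up on the target side: create the TCP transport, back two namespaces with 64 MB malloc bdevs, expose them through nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, advertise a discovery listener, then check from the initiator that both log entries come back before connecting. A minimal sketch of the same sequence (rpc.py path relative to an SPDK checkout; the run above additionally passes a fixed --hostid matching the generated host NQN):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: both discovery log entries should appear, after which the
    # two malloc-backed namespaces surface as /dev/nvme0n1 and /dev/nvme0n2.
    HOSTNQN=$(nvme gen-hostnqn)
    nvme discover -t tcp -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"
    nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$HOSTNQN"
    nvme list

The harness does the same thing by counting devices parsed out of nvme list before and after the connect (0, then 2) to confirm that both namespaces arrived.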
00:16:46.196 20:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:46.196 20:08:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:46.196 20:08:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:46.196 20:08:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:48.110 20:08:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:48.110 20:08:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:16:48.110 20:08:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:16:48.110 20:08:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:16:48.110 20:08:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:16:48.110 20:08:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:16:50.025 20:08:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:16:50.025 20:08:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:16:50.025 20:08:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:16:50.025 20:08:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:16:50.025 20:08:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:16:50.025 20:08:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:16:50.025 20:08:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:50.025 20:08:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:50.025 20:08:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:50.025 20:08:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:50.025 20:08:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:50.025 20:08:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:50.025 20:08:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:50.025 20:08:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:50.025 20:08:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:50.025 20:08:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:50.025 20:08:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:50.025 20:08:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:50.025 20:08:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:50.025 20:08:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:50.025 20:08:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:16:50.025 /dev/nvme0n1 ]] 00:16:50.025 20:08:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:50.025 20:08:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:50.025 20:08:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:50.025 20:08:42 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:50.025 20:08:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:50.287 20:08:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:50.287 20:08:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:50.287 20:08:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:50.287 20:08:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:50.287 20:08:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:50.287 20:08:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:50.287 20:08:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:50.287 20:08:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:50.287 20:08:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:50.287 20:08:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:50.287 20:08:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:50.287 20:08:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:50.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:50.564 20:08:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:50.564 20:08:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:16:50.564 20:08:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:16:50.564 20:08:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:50.564 20:08:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:16:50.564 20:08:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:50.564 20:08:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:16:50.564 20:08:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:50.564 20:08:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:50.564 20:08:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.564 20:08:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:50.564 20:08:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.564 20:08:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:50.564 20:08:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:50.564 20:08:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:50.564 20:08:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:16:50.564 20:08:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:50.564 20:08:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:16:50.564 20:08:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:50.564 20:08:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:50.564 rmmod nvme_tcp 00:16:50.564 rmmod nvme_fabrics 00:16:50.564 rmmod nvme_keyring 00:16:50.564 20:08:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:16:50.564 20:08:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:16:50.564 20:08:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:16:50.564 20:08:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 4169861 ']' 00:16:50.564 20:08:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 4169861 00:16:50.564 20:08:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 4169861 ']' 00:16:50.564 20:08:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 4169861 00:16:50.564 20:08:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:16:50.564 20:08:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:50.564 20:08:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4169861 00:16:50.564 20:08:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:50.564 20:08:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:50.564 20:08:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4169861' 00:16:50.564 killing process with pid 4169861 00:16:50.564 20:08:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 4169861 00:16:50.564 [2024-05-15 20:08:42.988642] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:50.564 20:08:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 4169861 00:16:50.824 20:08:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:50.825 20:08:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:50.825 20:08:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:50.825 20:08:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:50.825 20:08:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:50.825 20:08:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.825 20:08:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:50.825 20:08:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:52.740 20:08:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:52.740 00:16:52.740 real 0m16.203s 00:16:52.740 user 0m24.121s 00:16:52.740 sys 0m6.893s 00:16:52.740 20:08:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:52.740 20:08:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:52.740 ************************************ 00:16:52.740 END TEST nvmf_nvme_cli 00:16:52.740 ************************************ 00:16:53.001 20:08:45 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:16:53.001 20:08:45 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:53.001 20:08:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:53.001 20:08:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:53.001 20:08:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:53.001 ************************************ 
00:16:53.001 START TEST nvmf_host_management 00:16:53.001 ************************************ 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:53.001 * Looking for test storage... 00:16:53.001 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.001 
20:08:45 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:53.001 20:08:45 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:53.001 20:08:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:01.147 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:01.147 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:01.147 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:01.148 Found net devices under 0000:31:00.0: cvl_0_0 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:01.148 Found net devices under 0000:31:00.1: cvl_0_1 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:01.148 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:01.409 20:08:53 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:01.409 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:01.409 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:01.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:01.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.549 ms 00:17:01.409 00:17:01.409 --- 10.0.0.2 ping statistics --- 00:17:01.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.409 rtt min/avg/max/mdev = 0.549/0.549/0.549/0.000 ms 00:17:01.409 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:01.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:01.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.377 ms 00:17:01.410 00:17:01.410 --- 10.0.0.1 ping statistics --- 00:17:01.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.410 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:17:01.410 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:01.410 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:17:01.410 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:01.410 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:01.410 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:01.410 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:01.410 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:01.410 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:01.410 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:01.410 20:08:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:17:01.410 20:08:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:17:01.410 20:08:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:17:01.410 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:01.410 20:08:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:01.410 20:08:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:01.410 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=4175744 00:17:01.410 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 4175744 00:17:01.410 20:08:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 4175744 ']' 00:17:01.410 20:08:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:01.410 20:08:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.410 20:08:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:01.410 20:08:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:01.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.410 20:08:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:01.410 20:08:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:01.410 [2024-05-15 20:08:53.845100] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:17:01.410 [2024-05-15 20:08:53.845167] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:01.410 EAL: No free 2048 kB hugepages reported on node 1 00:17:01.671 [2024-05-15 20:08:53.923903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:01.671 [2024-05-15 20:08:53.997189] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:01.671 [2024-05-15 20:08:53.997229] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:01.671 [2024-05-15 20:08:53.997237] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:01.671 [2024-05-15 20:08:53.997244] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:01.671 [2024-05-15 20:08:53.997249] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:01.671 [2024-05-15 20:08:53.997418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:01.671 [2024-05-15 20:08:53.997691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:01.671 [2024-05-15 20:08:53.997847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:01.671 [2024-05-15 20:08:53.997848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.242 20:08:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:02.242 20:08:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:17:02.242 20:08:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:02.242 20:08:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:02.242 20:08:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:02.503 20:08:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.503 20:08:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:02.503 20:08:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.503 20:08:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:02.503 [2024-05-15 20:08:54.773242] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:02.503 20:08:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.503 20:08:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:17:02.503 20:08:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:02.503 20:08:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:02.503 20:08:54 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:02.503 20:08:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:17:02.503 20:08:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:17:02.503 20:08:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.503 20:08:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:02.503 Malloc0 00:17:02.503 [2024-05-15 20:08:54.836468] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:02.503 [2024-05-15 20:08:54.836691] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:02.503 20:08:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.503 20:08:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:17:02.503 20:08:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:02.503 20:08:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:02.503 20:08:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=4175839 00:17:02.503 20:08:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 4175839 /var/tmp/bdevperf.sock 00:17:02.503 20:08:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 4175839 ']' 00:17:02.503 20:08:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:02.503 20:08:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:02.503 20:08:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:02.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
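The netns plumbing and target bring-up traced above condense to the sketch below. It is illustrative only: the ice port names (cvl_0_0/cvl_0_1), the 10.0.0.x addresses, the core mask and the SPDK paths are taken from this CI host, and the subsystem/Malloc0/listener setup is batched from the rpcs.txt that host_management.sh cats into rpc_cmd rather than traced command-by-command, so only the explicitly traced nvmf_create_transport call is repeated here.

    # Recreate the two-port loopback topology used by nvmf_tcp_init (run as root).
    # Interface names and the SPDK build path are specific to this host; substitute your own.
    NS=cvl_0_0_ns_spdk

    ip netns add $NS                                        # target side lives in its own netns
    ip link set cvl_0_0 netns $NS
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator address, root namespace
    ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the netns
    ip link set cvl_0_1 up
    ip netns exec $NS ip link set cvl_0_0 up
    ip netns exec $NS ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # sanity-check both directions, as the harness does
    ping -c 1 10.0.0.2
    ip netns exec $NS ping -c 1 10.0.0.1

    modprobe nvme-tcp                                       # kernel initiator used by later tests

    # start the target inside the namespace (same flags as the traced nvmfappstart -m 0x1E),
    # wait for /var/tmp/spdk.sock, then create the TCP transport; the subsystem, Malloc0
    # namespace and the 10.0.0.2:4420 listener follow from the batched rpcs.txt.
    ip netns exec $NS ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192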
00:17:02.503 20:08:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:02.503 20:08:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:02.503 20:08:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:17:02.503 20:08:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:02.503 20:08:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:17:02.503 20:08:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:17:02.503 20:08:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:02.503 20:08:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:02.503 { 00:17:02.503 "params": { 00:17:02.503 "name": "Nvme$subsystem", 00:17:02.503 "trtype": "$TEST_TRANSPORT", 00:17:02.503 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:02.503 "adrfam": "ipv4", 00:17:02.503 "trsvcid": "$NVMF_PORT", 00:17:02.503 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:02.503 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:02.503 "hdgst": ${hdgst:-false}, 00:17:02.503 "ddgst": ${ddgst:-false} 00:17:02.503 }, 00:17:02.503 "method": "bdev_nvme_attach_controller" 00:17:02.503 } 00:17:02.503 EOF 00:17:02.503 )") 00:17:02.503 20:08:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:17:02.503 20:08:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:17:02.503 20:08:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:17:02.503 20:08:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:02.503 "params": { 00:17:02.503 "name": "Nvme0", 00:17:02.503 "trtype": "tcp", 00:17:02.503 "traddr": "10.0.0.2", 00:17:02.503 "adrfam": "ipv4", 00:17:02.503 "trsvcid": "4420", 00:17:02.503 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:02.503 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:02.503 "hdgst": false, 00:17:02.503 "ddgst": false 00:17:02.503 }, 00:17:02.503 "method": "bdev_nvme_attach_controller" 00:17:02.503 }' 00:17:02.503 [2024-05-15 20:08:54.935991] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:17:02.503 [2024-05-15 20:08:54.936043] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4175839 ] 00:17:02.503 EAL: No free 2048 kB hugepages reported on node 1 00:17:02.765 [2024-05-15 20:08:55.006814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.765 [2024-05-15 20:08:55.071702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.027 Running I/O for 10 seconds... 
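For reference, the bdevperf job just launched reads the JSON fragment printed by gen_nvmf_target_json above through /dev/fd/63. A hand-written equivalent would look roughly like the sketch below; it assumes the usual SPDK "subsystems"/"bdev" config envelope and a scratch file in place of the harness's process substitution, and the trailing bdev_get_iostat poll is the same check the waitforio helper performs over the bdevperf RPC socket.

    # Attach the remote namespace as bdev Nvme0n1 via a JSON config (values as traced above).
    cat > /tmp/nvme0.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF

    # 64-deep, 64 KiB verify workload for 10 s, with a private RPC socket so the
    # test can query the job while it runs.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!

    # Poll read I/O on Nvme0n1; host_management.sh requires at least 100 reads
    # before it removes the host NQN to force the disconnect exercised below.
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
        | jq -r '.bdevs[0].num_read_ops'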
00:17:03.294 20:08:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:03.294 20:08:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:17:03.294 20:08:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:03.294 20:08:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.294 20:08:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:03.294 20:08:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.294 20:08:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:03.294 20:08:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:17:03.295 20:08:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:03.295 20:08:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:17:03.295 20:08:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:17:03.295 20:08:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:17:03.295 20:08:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:17:03.295 20:08:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:03.295 20:08:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:03.295 20:08:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:03.295 20:08:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.295 20:08:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:03.559 20:08:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.559 20:08:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:17:03.559 20:08:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:17:03.559 20:08:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:17:03.559 20:08:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:17:03.559 20:08:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:17:03.559 20:08:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:03.559 20:08:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.559 20:08:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:03.559 [2024-05-15 20:08:55.832226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.559 [2024-05-15 20:08:55.832268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.559 [2024-05-15 20:08:55.832285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 
lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.559 [2024-05-15 20:08:55.832293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.559 [2024-05-15 20:08:55.832302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.559 [2024-05-15 20:08:55.832310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.559 [2024-05-15 20:08:55.832324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.559 [2024-05-15 20:08:55.832331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.559 [2024-05-15 20:08:55.832341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.559 [2024-05-15 20:08:55.832348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.559 [2024-05-15 20:08:55.832357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.559 [2024-05-15 20:08:55.832364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.559 [2024-05-15 20:08:55.832373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.559 [2024-05-15 20:08:55.832381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.559 [2024-05-15 20:08:55.832390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.559 [2024-05-15 20:08:55.832403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.559 [2024-05-15 20:08:55.832413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.559 [2024-05-15 20:08:55.832420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.559 [2024-05-15 20:08:55.832430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.559 [2024-05-15 20:08:55.832437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.559 [2024-05-15 20:08:55.832446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.559 [2024-05-15 20:08:55.832453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.559 [2024-05-15 20:08:55.832462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.559 [2024-05-15 20:08:55.832469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.559 [2024-05-15 20:08:55.832478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.559 [2024-05-15 20:08:55.832485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.559 [2024-05-15 20:08:55.832495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.559 [2024-05-15 20:08:55.832502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.559 [2024-05-15 20:08:55.832511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.559 [2024-05-15 20:08:55.832517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.559 [2024-05-15 20:08:55.832527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.559 [2024-05-15 20:08:55.832534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.559 [2024-05-15 20:08:55.832543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.559 [2024-05-15 20:08:55.832550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.559 [2024-05-15 20:08:55.832559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.559 [2024-05-15 20:08:55.832566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.559 [2024-05-15 20:08:55.832575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.559 [2024-05-15 20:08:55.832582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.559 [2024-05-15 20:08:55.832591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.559 [2024-05-15 20:08:55.832598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.559 [2024-05-15 20:08:55.832608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.559 [2024-05-15 20:08:55.832616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.559 [2024-05-15 20:08:55.832625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:03.559 [2024-05-15 20:08:55.832632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.559 [2024-05-15 20:08:55.832641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.559 [2024-05-15 20:08:55.832648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.559 [2024-05-15 20:08:55.832657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.559 [2024-05-15 20:08:55.832664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.559 [2024-05-15 20:08:55.832673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.832680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.832689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.832696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.832705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.832712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.832721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.832728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.832737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.832744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.832753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.832760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.832769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.832775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.832784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:03.560 [2024-05-15 20:08:55.832792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.832801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.832812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.832821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.832828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.832837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.832844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.832854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.832862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.832872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.832879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.832888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.832895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.832904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.832911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.832920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.832928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.832937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.832944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.832953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 
20:08:55.832960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.832969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.832976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.832986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.832992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.833002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.833009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.833020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.833027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.833036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.833043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.833052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.833058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.833068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.833075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.833084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.833091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.833100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.833107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.833117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.833124] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.833133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.833140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.833149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.833156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.833165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.833172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.833181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.833188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.833197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.833204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.833213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.833221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.833231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.833238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.833247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.833254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.833263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.833270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.833279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.833286] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.833295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.833302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.833311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.560 [2024-05-15 20:08:55.833321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.560 [2024-05-15 20:08:55.833330] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf45b0 is same with the state(5) to be set 00:17:03.560 [2024-05-15 20:08:55.833370] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xaf45b0 was disconnected and freed. reset controller. 00:17:03.560 [2024-05-15 20:08:55.834584] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:03.560 task offset: 81152 on job bdev=Nvme0n1 fails 00:17:03.560 00:17:03.561 Latency(us) 00:17:03.561 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:03.561 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:03.561 Job: Nvme0n1 ended in about 0.49 seconds with error 00:17:03.561 Verification LBA range: start 0x0 length 0x400 00:17:03.561 Nvme0n1 : 0.49 1169.67 73.10 129.96 0.00 47981.30 1761.28 45219.84 00:17:03.561 =================================================================================================================== 00:17:03.561 Total : 1169.67 73.10 129.96 0.00 47981.30 1761.28 45219.84 00:17:03.561 [2024-05-15 20:08:55.836582] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:03.561 [2024-05-15 20:08:55.836605] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c3080 (9): Bad file descriptor 00:17:03.561 20:08:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.561 20:08:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:03.561 20:08:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.561 20:08:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:03.561 [2024-05-15 20:08:55.839639] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:17:03.561 [2024-05-15 20:08:55.839745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:03.561 [2024-05-15 20:08:55.839767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.561 [2024-05-15 20:08:55.839783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:17:03.561 [2024-05-15 20:08:55.839790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect 
command completed with error: sct 1, sc 132 00:17:03.561 [2024-05-15 20:08:55.839797] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:17:03.561 [2024-05-15 20:08:55.839804] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6c3080 00:17:03.561 [2024-05-15 20:08:55.839823] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c3080 (9): Bad file descriptor 00:17:03.561 [2024-05-15 20:08:55.839834] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:03.561 [2024-05-15 20:08:55.839841] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:03.561 [2024-05-15 20:08:55.839848] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:03.561 [2024-05-15 20:08:55.839860] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:03.561 20:08:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.561 20:08:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:17:04.505 20:08:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 4175839 00:17:04.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (4175839) - No such process 00:17:04.505 20:08:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:17:04.505 20:08:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:17:04.505 20:08:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:17:04.505 20:08:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:17:04.505 20:08:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:17:04.505 20:08:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:17:04.505 20:08:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:04.505 20:08:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:04.505 { 00:17:04.505 "params": { 00:17:04.505 "name": "Nvme$subsystem", 00:17:04.505 "trtype": "$TEST_TRANSPORT", 00:17:04.505 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:04.505 "adrfam": "ipv4", 00:17:04.505 "trsvcid": "$NVMF_PORT", 00:17:04.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:04.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:04.505 "hdgst": ${hdgst:-false}, 00:17:04.505 "ddgst": ${ddgst:-false} 00:17:04.505 }, 00:17:04.505 "method": "bdev_nvme_attach_controller" 00:17:04.505 } 00:17:04.505 EOF 00:17:04.505 )") 00:17:04.505 20:08:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:17:04.505 20:08:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:17:04.505 20:08:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:17:04.505 20:08:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:04.505 "params": { 00:17:04.505 "name": "Nvme0", 00:17:04.505 "trtype": "tcp", 00:17:04.505 "traddr": "10.0.0.2", 00:17:04.505 "adrfam": "ipv4", 00:17:04.505 "trsvcid": "4420", 00:17:04.505 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:04.505 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:04.505 "hdgst": false, 00:17:04.505 "ddgst": false 00:17:04.505 }, 00:17:04.505 "method": "bdev_nvme_attach_controller" 00:17:04.505 }' 00:17:04.505 [2024-05-15 20:08:56.904193] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:17:04.505 [2024-05-15 20:08:56.904254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4176287 ] 00:17:04.505 EAL: No free 2048 kB hugepages reported on node 1 00:17:04.505 [2024-05-15 20:08:56.985748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.767 [2024-05-15 20:08:57.049952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.028 Running I/O for 1 seconds... 00:17:05.971 00:17:05.971 Latency(us) 00:17:05.971 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.971 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:05.971 Verification LBA range: start 0x0 length 0x400 00:17:05.971 Nvme0n1 : 1.04 1231.60 76.97 0.00 0.00 51067.04 5652.48 47404.37 00:17:05.971 =================================================================================================================== 00:17:05.971 Total : 1231.60 76.97 0.00 0.00 51067.04 5652.48 47404.37 00:17:06.232 20:08:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:17:06.232 20:08:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:17:06.232 20:08:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:06.232 20:08:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:06.232 20:08:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:17:06.232 20:08:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:06.232 20:08:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:17:06.232 20:08:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:06.232 20:08:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:17:06.232 20:08:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:06.232 20:08:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:06.232 rmmod nvme_tcp 00:17:06.232 rmmod nvme_fabrics 00:17:06.232 rmmod nvme_keyring 00:17:06.232 20:08:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:06.232 20:08:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:17:06.232 20:08:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:17:06.232 20:08:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # 
'[' -n 4175744 ']' 00:17:06.232 20:08:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 4175744 00:17:06.232 20:08:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 4175744 ']' 00:17:06.232 20:08:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 4175744 00:17:06.232 20:08:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:17:06.232 20:08:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:06.232 20:08:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4175744 00:17:06.232 20:08:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:06.232 20:08:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:06.232 20:08:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4175744' 00:17:06.232 killing process with pid 4175744 00:17:06.232 20:08:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 4175744 00:17:06.232 [2024-05-15 20:08:58.666710] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:06.233 20:08:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 4175744 00:17:06.494 [2024-05-15 20:08:58.784765] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:06.494 20:08:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:06.494 20:08:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:06.494 20:08:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:06.494 20:08:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:06.494 20:08:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:06.494 20:08:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.494 20:08:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.494 20:08:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.410 20:09:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:08.410 20:09:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:17:08.410 00:17:08.410 real 0m15.582s 00:17:08.410 user 0m24.351s 00:17:08.410 sys 0m7.156s 00:17:08.410 20:09:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:08.410 20:09:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:08.410 ************************************ 00:17:08.410 END TEST nvmf_host_management 00:17:08.410 ************************************ 00:17:08.672 20:09:00 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:08.672 20:09:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:08.672 20:09:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:08.672 20:09:00 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:17:08.672 ************************************ 00:17:08.672 START TEST nvmf_lvol 00:17:08.672 ************************************ 00:17:08.672 20:09:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:08.672 * Looking for test storage... 00:17:08.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:08.672 20:09:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:08.672 20:09:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # 
local -g is_hw=no 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:17:08.673 20:09:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:16.815 20:09:09 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:16.815 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.815 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:16.816 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:16.816 Found net devices under 0000:31:00.0: cvl_0_0 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
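Annotation: the discovery loop above maps each supported PCI function (here the two Intel E810 ports reported as 0x8086:0x159b, bound to the ice driver) to its kernel net device by globbing sysfs and stripping the directory prefix. A minimal standalone sketch of that lookup, assuming only the PCI address shown in the log:

    # List the net interfaces the kernel exposes for one PCI function, using the
    # same /sys/bus/pci/devices/<pci>/net/* glob and ##*/ prefix strip as common.sh.
    pci=0000:31:00.0
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$path" ] || continue                  # skip functions with no bound net interface
        echo "Found net devices under $pci: ${path##*/}"
    done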
00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:16.816 Found net devices under 0000:31:00.1: cvl_0_1 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:16.816 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:17.078 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:17.078 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:17.078 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:17.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:17.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:17:17.078 00:17:17.078 --- 10.0.0.2 ping statistics --- 00:17:17.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.078 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:17:17.078 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:17.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:17.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:17:17.078 00:17:17.078 --- 10.0.0.1 ping statistics --- 00:17:17.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.078 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:17:17.078 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:17.078 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:17:17.078 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:17.078 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:17.078 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:17.078 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:17.078 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:17.078 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:17.078 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:17.078 20:09:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:17.078 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:17.078 20:09:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:17.078 20:09:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:17.078 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=4181947 00:17:17.078 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 4181947 00:17:17.078 20:09:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:17.078 20:09:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 4181947 ']' 00:17:17.078 20:09:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.078 20:09:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:17.078 20:09:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.078 20:09:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:17.078 20:09:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:17.078 [2024-05-15 20:09:09.542108] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:17:17.078 [2024-05-15 20:09:09.542173] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:17.369 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.369 [2024-05-15 20:09:09.637350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:17.369 [2024-05-15 20:09:09.733021] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:17.369 [2024-05-15 20:09:09.733081] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:17.369 [2024-05-15 20:09:09.733089] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:17.369 [2024-05-15 20:09:09.733096] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:17.369 [2024-05-15 20:09:09.733102] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:17.369 [2024-05-15 20:09:09.733256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.369 [2024-05-15 20:09:09.733412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:17.369 [2024-05-15 20:09:09.733657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.986 20:09:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:17.986 20:09:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:17:17.986 20:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:17.986 20:09:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:17.986 20:09:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:17.986 20:09:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:17.986 20:09:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:18.246 [2024-05-15 20:09:10.656822] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:18.246 20:09:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:18.508 20:09:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:18.508 20:09:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:18.768 20:09:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:18.768 20:09:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:19.029 20:09:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:19.288 20:09:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=e6a42cd5-7854-44c1-8b24-dd3622ed6e91 00:17:19.288 20:09:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e6a42cd5-7854-44c1-8b24-dd3622ed6e91 lvol 20 00:17:19.548 20:09:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=cdb5ac12-c250-42f5-a7cf-b1a8a51677ac 00:17:19.548 20:09:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:19.548 20:09:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cdb5ac12-c250-42f5-a7cf-b1a8a51677ac 00:17:19.808 20:09:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
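Annotation: condensing the xtrace above, the lvol target for this test is provisioned as two malloc bdevs striped into a raid0, an lvstore on top of the raid, a 20 MiB lvol in that lvstore, and an NVMe-oF subsystem exporting the lvol over TCP. A sketch of the same RPC sequence, with names and UUIDs reproduced from the log:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                    # TCP transport, options as used by the test
    $rpc bdev_malloc_create 64 512                                  # Malloc0: 64 MB, 512-byte blocks
    $rpc bdev_malloc_create 64 512                                  # Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'  # stripe the two malloc bdevs
    $rpc bdev_lvol_create_lvstore raid0 lvs                         # prints lvs UUID e6a42cd5-7854-44c1-8b24-dd3622ed6e91
    $rpc bdev_lvol_create -u e6a42cd5-7854-44c1-8b24-dd3622ed6e91 lvol 20   # 20 MiB lvol, UUID cdb5ac12-...
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cdb5ac12-c250-42f5-a7cf-b1a8a51677ac
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420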
00:17:20.069 [2024-05-15 20:09:12.455149] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:20.069 [2024-05-15 20:09:12.455404] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:20.069 20:09:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:20.330 20:09:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=4182509 00:17:20.330 20:09:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:20.330 20:09:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:20.330 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.272 20:09:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot cdb5ac12-c250-42f5-a7cf-b1a8a51677ac MY_SNAPSHOT 00:17:21.533 20:09:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=3d3bedaf-776c-4ba1-93d0-0aa0312cc9a7 00:17:21.533 20:09:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize cdb5ac12-c250-42f5-a7cf-b1a8a51677ac 30 00:17:21.794 20:09:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 3d3bedaf-776c-4ba1-93d0-0aa0312cc9a7 MY_CLONE 00:17:22.055 20:09:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ebefd854-b024-4c80-be43-d8e3ab070699 00:17:22.055 20:09:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ebefd854-b024-4c80-be43-d8e3ab070699 00:17:22.316 20:09:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 4182509 00:17:32.319 Initializing NVMe Controllers 00:17:32.319 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:32.319 Controller IO queue size 128, less than required. 00:17:32.319 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:32.319 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:32.319 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:32.319 Initialization complete. Launching workers. 
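Annotation: while the spdk_nvme_perf job (perf_pid 4182509) writes to the exported lvol, the test exercises the snapshot lifecycle recorded in the xtrace just above: snapshot, resize, clone, inflate. Condensed into a sketch, with names and UUIDs reproduced from the log:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    lvol=cdb5ac12-c250-42f5-a7cf-b1a8a51677ac                  # lvol UUID printed by bdev_lvol_create
    snapshot=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)    # read-only snapshot; its UUID is printed on stdout
    $rpc bdev_lvol_resize "$lvol" 30                           # grow the live lvol from 20 to 30 (MiB)
    clone=$($rpc bdev_lvol_clone "$snapshot" MY_CLONE)         # writable clone backed by the snapshot
    $rpc bdev_lvol_inflate "$clone"                            # fully allocate the clone so it no longer depends on the snapshot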
00:17:32.319 ======================================================== 00:17:32.319 Latency(us) 00:17:32.319 Device Information : IOPS MiB/s Average min max 00:17:32.319 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12288.90 48.00 10418.11 1523.14 58463.16 00:17:32.319 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12391.30 48.40 10331.55 3842.33 56702.27 00:17:32.319 ======================================================== 00:17:32.319 Total : 24680.20 96.41 10374.65 1523.14 58463.16 00:17:32.319 00:17:32.319 20:09:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:32.319 20:09:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cdb5ac12-c250-42f5-a7cf-b1a8a51677ac 00:17:32.319 20:09:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e6a42cd5-7854-44c1-8b24-dd3622ed6e91 00:17:32.319 20:09:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:32.319 20:09:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:32.319 20:09:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:32.319 20:09:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:32.319 20:09:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:17:32.319 20:09:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:32.319 20:09:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:17:32.319 20:09:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:32.319 20:09:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:32.319 rmmod nvme_tcp 00:17:32.319 rmmod nvme_fabrics 00:17:32.319 rmmod nvme_keyring 00:17:32.319 20:09:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:32.319 20:09:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:17:32.319 20:09:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:17:32.319 20:09:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 4181947 ']' 00:17:32.319 20:09:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 4181947 00:17:32.319 20:09:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 4181947 ']' 00:17:32.319 20:09:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 4181947 00:17:32.319 20:09:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:17:32.319 20:09:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:32.319 20:09:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4181947 00:17:32.319 20:09:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:32.319 20:09:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:32.319 20:09:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4181947' 00:17:32.319 killing process with pid 4181947 00:17:32.319 20:09:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 4181947 00:17:32.319 [2024-05-15 20:09:23.739474] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' 
scheduled for removal in v24.09 hit 1 times 00:17:32.319 20:09:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 4181947 00:17:32.319 20:09:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:32.319 20:09:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:32.319 20:09:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:32.319 20:09:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:32.319 20:09:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:32.319 20:09:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.319 20:09:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:32.319 20:09:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.706 20:09:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:33.706 00:17:33.706 real 0m24.993s 00:17:33.706 user 1m6.544s 00:17:33.706 sys 0m8.783s 00:17:33.706 20:09:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:33.706 20:09:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:33.706 ************************************ 00:17:33.706 END TEST nvmf_lvol 00:17:33.706 ************************************ 00:17:33.706 20:09:26 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:33.707 20:09:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:33.707 20:09:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:33.707 20:09:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:33.707 ************************************ 00:17:33.707 START TEST nvmf_lvs_grow 00:17:33.707 ************************************ 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:33.707 * Looking for test storage... 
00:17:33.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:17:33.707 20:09:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:41.854 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:41.854 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:41.854 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:41.855 Found net devices under 0000:31:00.0: cvl_0_0 00:17:41.855 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:41.855 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:41.855 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:41.855 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:41.855 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:41.855 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:41.855 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:17:41.855 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:41.855 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:41.855 Found net devices under 0000:31:00.1: cvl_0_1 00:17:41.855 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:41.855 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:41.855 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:17:41.855 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:41.855 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:41.855 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:41.855 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:41.855 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:41.855 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:41.855 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:41.855 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:41.855 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:41.855 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:41.855 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:41.855 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:41.855 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:41.855 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:41.855 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:41.855 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:42.116 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:42.116 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:42.116 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:42.116 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:42.378 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:42.378 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:42.378 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:42.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:42.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.912 ms 00:17:42.378 00:17:42.378 --- 10.0.0.2 ping statistics --- 00:17:42.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.378 rtt min/avg/max/mdev = 0.912/0.912/0.912/0.000 ms 00:17:42.378 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:42.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:42.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.477 ms 00:17:42.378 00:17:42.378 --- 10.0.0.1 ping statistics --- 00:17:42.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.378 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:17:42.378 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:42.378 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:17:42.378 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:42.378 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:42.378 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:42.378 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:42.378 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:42.378 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:42.378 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:42.378 20:09:34 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:17:42.378 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:42.378 20:09:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:42.378 20:09:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:42.378 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=4189496 00:17:42.378 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 4189496 00:17:42.378 20:09:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:42.378 20:09:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 4189496 ']' 00:17:42.378 20:09:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.378 20:09:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:42.378 20:09:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.378 20:09:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:42.378 20:09:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:42.378 [2024-05-15 20:09:34.759033] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:17:42.378 [2024-05-15 20:09:34.759093] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.378 EAL: No free 2048 kB hugepages reported on node 1 00:17:42.378 [2024-05-15 20:09:34.855191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.639 [2024-05-15 20:09:34.949158] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:42.639 [2024-05-15 20:09:34.949218] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:42.639 [2024-05-15 20:09:34.949226] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:42.639 [2024-05-15 20:09:34.949233] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:42.639 [2024-05-15 20:09:34.949239] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:42.640 [2024-05-15 20:09:34.949264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.213 20:09:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:43.213 20:09:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:17:43.213 20:09:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:43.213 20:09:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:43.213 20:09:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:43.213 20:09:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.213 20:09:35 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:43.475 [2024-05-15 20:09:35.853406] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:43.475 20:09:35 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:17:43.475 20:09:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:43.475 20:09:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:43.475 20:09:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:43.475 ************************************ 00:17:43.475 START TEST lvs_grow_clean 00:17:43.475 ************************************ 00:17:43.475 20:09:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:17:43.475 20:09:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:43.475 20:09:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:43.475 20:09:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:43.475 20:09:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:43.475 20:09:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:43.475 20:09:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:43.475 20:09:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:43.475 20:09:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:43.475 20:09:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:43.736 20:09:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:17:43.736 20:09:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:43.997 20:09:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=acdfb2bf-238c-4a89-8367-168e5e59cc2e 00:17:43.997 20:09:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acdfb2bf-238c-4a89-8367-168e5e59cc2e 00:17:43.997 20:09:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:44.258 20:09:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:44.258 20:09:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:44.258 20:09:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u acdfb2bf-238c-4a89-8367-168e5e59cc2e lvol 150 00:17:44.518 20:09:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=aa812810-5f7d-417b-a5cb-1fb759984a4b 00:17:44.518 20:09:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:44.518 20:09:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:44.518 [2024-05-15 20:09:37.006035] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:44.518 [2024-05-15 20:09:37.006102] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:44.518 true 00:17:44.778 20:09:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acdfb2bf-238c-4a89-8367-168e5e59cc2e 00:17:44.778 20:09:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:44.778 20:09:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:44.778 20:09:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:45.039 20:09:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 aa812810-5f7d-417b-a5cb-1fb759984a4b 00:17:45.300 20:09:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:45.561 [2024-05-15 20:09:37.848323] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:45.561 [2024-05-15 
20:09:37.848643] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:45.561 20:09:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:45.822 20:09:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:45.822 20:09:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4190210 00:17:45.822 20:09:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:45.822 20:09:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4190210 /var/tmp/bdevperf.sock 00:17:45.822 20:09:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 4190210 ']' 00:17:45.822 20:09:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:45.822 20:09:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:45.822 20:09:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:45.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:45.822 20:09:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:45.822 20:09:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:45.822 [2024-05-15 20:09:38.114345] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
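With the listener on 10.0.0.2:4420 in place, the records just above and immediately below assemble the I/O path for the clean-grow run: the lvol is exported through nqn.2016-06.io.spdk:cnode0, and bdevperf is started on its own RPC socket and attached to it over TCP. The sketch below condenses those logged commands; $SPDK abbreviates the workspace path, $lvol is the lvol UUID created earlier in the trace, and the socket wait is an illustrative stand-in for the harness helper.

# Hedged outline of the export + bdevperf attach seen in this part of the trace.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
lvol=aa812810-5f7d-417b-a5cb-1fb759984a4b                # lvol bdev created earlier in this run
"$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# bdevperf is a second SPDK app with its own RPC socket; -w randwrite -t 10 matches the logged run
"$SPDK/build/examples/bdevperf" -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
until [ -S /var/tmp/bdevperf.sock ]; do sleep 0.5; done  # stand-in for waitforlisten on bdevperf's socket
"$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0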
00:17:45.822 [2024-05-15 20:09:38.114410] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4190210 ] 00:17:45.822 EAL: No free 2048 kB hugepages reported on node 1 00:17:45.822 [2024-05-15 20:09:38.184932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.822 [2024-05-15 20:09:38.257285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.082 20:09:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:46.082 20:09:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:17:46.082 20:09:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:46.343 Nvme0n1 00:17:46.343 20:09:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:46.604 [ 00:17:46.604 { 00:17:46.604 "name": "Nvme0n1", 00:17:46.604 "aliases": [ 00:17:46.604 "aa812810-5f7d-417b-a5cb-1fb759984a4b" 00:17:46.604 ], 00:17:46.604 "product_name": "NVMe disk", 00:17:46.604 "block_size": 4096, 00:17:46.604 "num_blocks": 38912, 00:17:46.604 "uuid": "aa812810-5f7d-417b-a5cb-1fb759984a4b", 00:17:46.604 "assigned_rate_limits": { 00:17:46.604 "rw_ios_per_sec": 0, 00:17:46.604 "rw_mbytes_per_sec": 0, 00:17:46.604 "r_mbytes_per_sec": 0, 00:17:46.604 "w_mbytes_per_sec": 0 00:17:46.604 }, 00:17:46.604 "claimed": false, 00:17:46.604 "zoned": false, 00:17:46.604 "supported_io_types": { 00:17:46.604 "read": true, 00:17:46.604 "write": true, 00:17:46.604 "unmap": true, 00:17:46.604 "write_zeroes": true, 00:17:46.604 "flush": true, 00:17:46.604 "reset": true, 00:17:46.604 "compare": true, 00:17:46.604 "compare_and_write": true, 00:17:46.604 "abort": true, 00:17:46.604 "nvme_admin": true, 00:17:46.604 "nvme_io": true 00:17:46.604 }, 00:17:46.604 "memory_domains": [ 00:17:46.604 { 00:17:46.604 "dma_device_id": "system", 00:17:46.604 "dma_device_type": 1 00:17:46.604 } 00:17:46.604 ], 00:17:46.604 "driver_specific": { 00:17:46.604 "nvme": [ 00:17:46.604 { 00:17:46.604 "trid": { 00:17:46.604 "trtype": "TCP", 00:17:46.604 "adrfam": "IPv4", 00:17:46.604 "traddr": "10.0.0.2", 00:17:46.604 "trsvcid": "4420", 00:17:46.604 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:46.604 }, 00:17:46.604 "ctrlr_data": { 00:17:46.604 "cntlid": 1, 00:17:46.604 "vendor_id": "0x8086", 00:17:46.604 "model_number": "SPDK bdev Controller", 00:17:46.604 "serial_number": "SPDK0", 00:17:46.604 "firmware_revision": "24.05", 00:17:46.604 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:46.604 "oacs": { 00:17:46.604 "security": 0, 00:17:46.604 "format": 0, 00:17:46.604 "firmware": 0, 00:17:46.604 "ns_manage": 0 00:17:46.604 }, 00:17:46.604 "multi_ctrlr": true, 00:17:46.604 "ana_reporting": false 00:17:46.604 }, 00:17:46.604 "vs": { 00:17:46.604 "nvme_version": "1.3" 00:17:46.604 }, 00:17:46.604 "ns_data": { 00:17:46.604 "id": 1, 00:17:46.604 "can_share": true 00:17:46.604 } 00:17:46.604 } 00:17:46.604 ], 00:17:46.604 "mp_policy": "active_passive" 00:17:46.604 } 00:17:46.604 } 00:17:46.604 ] 00:17:46.604 20:09:38 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4190275 00:17:46.604 20:09:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:46.604 20:09:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:46.604 Running I/O for 10 seconds... 00:17:47.581 Latency(us) 00:17:47.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.581 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:47.581 Nvme0n1 : 1.00 18250.00 71.29 0.00 0.00 0.00 0.00 0.00 00:17:47.581 =================================================================================================================== 00:17:47.581 Total : 18250.00 71.29 0.00 0.00 0.00 0.00 0.00 00:17:47.581 00:17:48.523 20:09:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u acdfb2bf-238c-4a89-8367-168e5e59cc2e 00:17:48.784 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:48.784 Nvme0n1 : 2.00 18341.00 71.64 0.00 0.00 0.00 0.00 0.00 00:17:48.784 =================================================================================================================== 00:17:48.784 Total : 18341.00 71.64 0.00 0.00 0.00 0.00 0.00 00:17:48.784 00:17:48.784 true 00:17:48.784 20:09:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acdfb2bf-238c-4a89-8367-168e5e59cc2e 00:17:48.784 20:09:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:49.045 20:09:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:49.045 20:09:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:49.045 20:09:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 4190275 00:17:49.614 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:49.614 Nvme0n1 : 3.00 18371.33 71.76 0.00 0.00 0.00 0.00 0.00 00:17:49.614 =================================================================================================================== 00:17:49.615 Total : 18371.33 71.76 0.00 0.00 0.00 0.00 0.00 00:17:49.615 00:17:50.557 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:50.557 Nvme0n1 : 4.00 18386.50 71.82 0.00 0.00 0.00 0.00 0.00 00:17:50.557 =================================================================================================================== 00:17:50.557 Total : 18386.50 71.82 0.00 0.00 0.00 0.00 0.00 00:17:50.557 00:17:51.949 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:51.949 Nvme0n1 : 5.00 18421.00 71.96 0.00 0.00 0.00 0.00 0.00 00:17:51.949 =================================================================================================================== 00:17:51.949 Total : 18421.00 71.96 0.00 0.00 0.00 0.00 0.00 00:17:51.949 00:17:52.892 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:52.892 Nvme0n1 : 6.00 18444.17 72.05 0.00 0.00 0.00 0.00 0.00 00:17:52.892 
=================================================================================================================== 00:17:52.892 Total : 18444.17 72.05 0.00 0.00 0.00 0.00 0.00 00:17:52.892 00:17:53.835 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:53.835 Nvme0n1 : 7.00 18460.71 72.11 0.00 0.00 0.00 0.00 0.00 00:17:53.835 =================================================================================================================== 00:17:53.835 Total : 18460.71 72.11 0.00 0.00 0.00 0.00 0.00 00:17:53.835 00:17:54.777 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:54.777 Nvme0n1 : 8.00 18473.00 72.16 0.00 0.00 0.00 0.00 0.00 00:17:54.777 =================================================================================================================== 00:17:54.777 Total : 18473.00 72.16 0.00 0.00 0.00 0.00 0.00 00:17:54.777 00:17:55.719 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:55.719 Nvme0n1 : 9.00 18482.67 72.20 0.00 0.00 0.00 0.00 0.00 00:17:55.719 =================================================================================================================== 00:17:55.719 Total : 18482.67 72.20 0.00 0.00 0.00 0.00 0.00 00:17:55.719 00:17:56.661 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:56.661 Nvme0n1 : 10.00 18490.40 72.23 0.00 0.00 0.00 0.00 0.00 00:17:56.661 =================================================================================================================== 00:17:56.661 Total : 18490.40 72.23 0.00 0.00 0.00 0.00 0.00 00:17:56.661 00:17:56.661 00:17:56.661 Latency(us) 00:17:56.661 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.661 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:56.661 Nvme0n1 : 10.01 18490.36 72.23 0.00 0.00 6917.26 3345.07 11086.51 00:17:56.661 =================================================================================================================== 00:17:56.661 Total : 18490.36 72.23 0.00 0.00 6917.26 3345.07 11086.51 00:17:56.661 0 00:17:56.661 20:09:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4190210 00:17:56.661 20:09:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 4190210 ']' 00:17:56.661 20:09:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 4190210 00:17:56.661 20:09:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:17:56.661 20:09:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:56.661 20:09:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4190210 00:17:56.661 20:09:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:56.661 20:09:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:56.661 20:09:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4190210' 00:17:56.661 killing process with pid 4190210 00:17:56.661 20:09:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 4190210 00:17:56.661 Received shutdown signal, test time was about 10.000000 seconds 00:17:56.661 00:17:56.661 Latency(us) 00:17:56.661 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:17:56.661 =================================================================================================================== 00:17:56.661 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:56.661 20:09:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 4190210 00:17:56.922 20:09:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:57.182 20:09:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:57.443 20:09:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acdfb2bf-238c-4a89-8367-168e5e59cc2e 00:17:57.443 20:09:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:57.443 20:09:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:57.443 20:09:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:57.443 20:09:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:57.704 [2024-05-15 20:09:50.125795] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:57.704 20:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acdfb2bf-238c-4a89-8367-168e5e59cc2e 00:17:57.704 20:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:17:57.704 20:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acdfb2bf-238c-4a89-8367-168e5e59cc2e 00:17:57.704 20:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:57.704 20:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:57.704 20:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:57.704 20:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:57.704 20:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:57.704 20:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:57.704 20:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:57.704 20:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:57.704 20:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acdfb2bf-238c-4a89-8367-168e5e59cc2e 00:17:57.964 request: 00:17:57.964 { 00:17:57.964 "uuid": "acdfb2bf-238c-4a89-8367-168e5e59cc2e", 00:17:57.964 "method": "bdev_lvol_get_lvstores", 00:17:57.964 "req_id": 1 00:17:57.964 } 00:17:57.964 Got JSON-RPC error response 00:17:57.964 response: 00:17:57.964 { 00:17:57.964 "code": -19, 00:17:57.964 "message": "No such device" 00:17:57.964 } 00:17:57.964 20:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:17:57.964 20:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:57.964 20:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:57.964 20:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:57.964 20:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:58.225 aio_bdev 00:17:58.225 20:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev aa812810-5f7d-417b-a5cb-1fb759984a4b 00:17:58.225 20:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=aa812810-5f7d-417b-a5cb-1fb759984a4b 00:17:58.225 20:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:58.225 20:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:17:58.225 20:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:58.225 20:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:58.225 20:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:58.486 20:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b aa812810-5f7d-417b-a5cb-1fb759984a4b -t 2000 00:17:58.486 [ 00:17:58.486 { 00:17:58.486 "name": "aa812810-5f7d-417b-a5cb-1fb759984a4b", 00:17:58.486 "aliases": [ 00:17:58.486 "lvs/lvol" 00:17:58.486 ], 00:17:58.486 "product_name": "Logical Volume", 00:17:58.486 "block_size": 4096, 00:17:58.486 "num_blocks": 38912, 00:17:58.486 "uuid": "aa812810-5f7d-417b-a5cb-1fb759984a4b", 00:17:58.486 "assigned_rate_limits": { 00:17:58.486 "rw_ios_per_sec": 0, 00:17:58.486 "rw_mbytes_per_sec": 0, 00:17:58.486 "r_mbytes_per_sec": 0, 00:17:58.486 "w_mbytes_per_sec": 0 00:17:58.486 }, 00:17:58.486 "claimed": false, 00:17:58.486 "zoned": false, 00:17:58.486 "supported_io_types": { 00:17:58.486 "read": true, 00:17:58.486 "write": true, 00:17:58.486 "unmap": true, 00:17:58.486 "write_zeroes": true, 00:17:58.486 "flush": false, 00:17:58.486 "reset": true, 00:17:58.486 "compare": false, 00:17:58.486 "compare_and_write": false, 00:17:58.486 "abort": false, 00:17:58.486 "nvme_admin": false, 00:17:58.486 "nvme_io": false 00:17:58.486 }, 00:17:58.486 "driver_specific": { 00:17:58.486 "lvol": { 00:17:58.486 "lvol_store_uuid": "acdfb2bf-238c-4a89-8367-168e5e59cc2e", 00:17:58.486 "base_bdev": "aio_bdev", 
00:17:58.486 "thin_provision": false, 00:17:58.486 "num_allocated_clusters": 38, 00:17:58.486 "snapshot": false, 00:17:58.486 "clone": false, 00:17:58.486 "esnap_clone": false 00:17:58.486 } 00:17:58.486 } 00:17:58.486 } 00:17:58.486 ] 00:17:58.486 20:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:17:58.747 20:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:58.748 20:09:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acdfb2bf-238c-4a89-8367-168e5e59cc2e 00:17:58.748 20:09:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:58.748 20:09:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acdfb2bf-238c-4a89-8367-168e5e59cc2e 00:17:58.748 20:09:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:59.009 20:09:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:59.009 20:09:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete aa812810-5f7d-417b-a5cb-1fb759984a4b 00:17:59.270 20:09:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u acdfb2bf-238c-4a89-8367-168e5e59cc2e 00:17:59.531 20:09:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:59.531 20:09:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:59.531 00:17:59.531 real 0m16.089s 00:17:59.531 user 0m15.763s 00:17:59.531 sys 0m1.388s 00:17:59.531 20:09:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:59.531 20:09:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:59.531 ************************************ 00:17:59.531 END TEST lvs_grow_clean 00:17:59.531 ************************************ 00:17:59.799 20:09:52 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:59.799 20:09:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:59.799 20:09:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:59.799 20:09:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:59.799 ************************************ 00:17:59.799 START TEST lvs_grow_dirty 00:17:59.799 ************************************ 00:17:59.799 20:09:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:17:59.799 20:09:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:59.799 20:09:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:59.799 20:09:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:17:59.799 20:09:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:59.799 20:09:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:59.799 20:09:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:59.799 20:09:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:59.799 20:09:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:59.799 20:09:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:00.100 20:09:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:00.100 20:09:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:00.100 20:09:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=9d14e92f-4648-4011-b735-8b3b1f556ece 00:18:00.100 20:09:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d14e92f-4648-4011-b735-8b3b1f556ece 00:18:00.100 20:09:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:00.387 20:09:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:00.387 20:09:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:00.387 20:09:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9d14e92f-4648-4011-b735-8b3b1f556ece lvol 150 00:18:00.648 20:09:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a6cb1936-7876-48de-86db-47f62929109d 00:18:00.648 20:09:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:00.648 20:09:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:00.648 [2024-05-15 20:09:53.121376] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:00.648 [2024-05-15 20:09:53.121426] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:00.648 true 00:18:00.648 20:09:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:00.648 20:09:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
9d14e92f-4648-4011-b735-8b3b1f556ece 00:18:00.908 20:09:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:00.908 20:09:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:01.168 20:09:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a6cb1936-7876-48de-86db-47f62929109d 00:18:01.429 20:09:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:01.429 [2024-05-15 20:09:53.927744] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:01.689 20:09:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:01.690 20:09:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:01.690 20:09:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4193294 00:18:01.690 20:09:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:01.690 20:09:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4193294 /var/tmp/bdevperf.sock 00:18:01.690 20:09:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 4193294 ']' 00:18:01.690 20:09:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:01.690 20:09:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:01.690 20:09:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:01.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:01.690 20:09:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:01.690 20:09:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:01.690 [2024-05-15 20:09:54.153277] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:18:01.690 [2024-05-15 20:09:54.153331] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4193294 ] 00:18:01.690 EAL: No free 2048 kB hugepages reported on node 1 00:18:01.950 [2024-05-15 20:09:54.217211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.950 [2024-05-15 20:09:54.281446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.950 20:09:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:01.950 20:09:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:18:01.950 20:09:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:02.522 Nvme0n1 00:18:02.522 20:09:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:02.784 [ 00:18:02.784 { 00:18:02.784 "name": "Nvme0n1", 00:18:02.784 "aliases": [ 00:18:02.784 "a6cb1936-7876-48de-86db-47f62929109d" 00:18:02.784 ], 00:18:02.784 "product_name": "NVMe disk", 00:18:02.784 "block_size": 4096, 00:18:02.784 "num_blocks": 38912, 00:18:02.784 "uuid": "a6cb1936-7876-48de-86db-47f62929109d", 00:18:02.784 "assigned_rate_limits": { 00:18:02.784 "rw_ios_per_sec": 0, 00:18:02.784 "rw_mbytes_per_sec": 0, 00:18:02.784 "r_mbytes_per_sec": 0, 00:18:02.784 "w_mbytes_per_sec": 0 00:18:02.784 }, 00:18:02.784 "claimed": false, 00:18:02.784 "zoned": false, 00:18:02.784 "supported_io_types": { 00:18:02.784 "read": true, 00:18:02.784 "write": true, 00:18:02.784 "unmap": true, 00:18:02.784 "write_zeroes": true, 00:18:02.784 "flush": true, 00:18:02.784 "reset": true, 00:18:02.784 "compare": true, 00:18:02.784 "compare_and_write": true, 00:18:02.784 "abort": true, 00:18:02.784 "nvme_admin": true, 00:18:02.784 "nvme_io": true 00:18:02.784 }, 00:18:02.784 "memory_domains": [ 00:18:02.784 { 00:18:02.784 "dma_device_id": "system", 00:18:02.784 "dma_device_type": 1 00:18:02.784 } 00:18:02.784 ], 00:18:02.784 "driver_specific": { 00:18:02.784 "nvme": [ 00:18:02.784 { 00:18:02.784 "trid": { 00:18:02.784 "trtype": "TCP", 00:18:02.784 "adrfam": "IPv4", 00:18:02.784 "traddr": "10.0.0.2", 00:18:02.784 "trsvcid": "4420", 00:18:02.784 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:02.784 }, 00:18:02.784 "ctrlr_data": { 00:18:02.784 "cntlid": 1, 00:18:02.784 "vendor_id": "0x8086", 00:18:02.784 "model_number": "SPDK bdev Controller", 00:18:02.784 "serial_number": "SPDK0", 00:18:02.784 "firmware_revision": "24.05", 00:18:02.784 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:02.784 "oacs": { 00:18:02.784 "security": 0, 00:18:02.784 "format": 0, 00:18:02.784 "firmware": 0, 00:18:02.784 "ns_manage": 0 00:18:02.784 }, 00:18:02.784 "multi_ctrlr": true, 00:18:02.784 "ana_reporting": false 00:18:02.784 }, 00:18:02.784 "vs": { 00:18:02.784 "nvme_version": "1.3" 00:18:02.784 }, 00:18:02.784 "ns_data": { 00:18:02.784 "id": 1, 00:18:02.784 "can_share": true 00:18:02.784 } 00:18:02.784 } 00:18:02.784 ], 00:18:02.784 "mp_policy": "active_passive" 00:18:02.784 } 00:18:02.784 } 00:18:02.784 ] 00:18:02.784 20:09:55 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4193627 00:18:02.784 20:09:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:02.784 20:09:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:02.784 Running I/O for 10 seconds... 00:18:03.728 Latency(us) 00:18:03.728 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.728 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:03.728 Nvme0n1 : 1.00 18179.00 71.01 0.00 0.00 0.00 0.00 0.00 00:18:03.728 =================================================================================================================== 00:18:03.728 Total : 18179.00 71.01 0.00 0.00 0.00 0.00 0.00 00:18:03.728 00:18:04.671 20:09:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9d14e92f-4648-4011-b735-8b3b1f556ece 00:18:04.671 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:04.671 Nvme0n1 : 2.00 18341.00 71.64 0.00 0.00 0.00 0.00 0.00 00:18:04.671 =================================================================================================================== 00:18:04.671 Total : 18341.00 71.64 0.00 0.00 0.00 0.00 0.00 00:18:04.671 00:18:04.932 true 00:18:04.932 20:09:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d14e92f-4648-4011-b735-8b3b1f556ece 00:18:04.932 20:09:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:05.192 20:09:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:05.192 20:09:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:05.192 20:09:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 4193627 00:18:05.765 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:05.765 Nvme0n1 : 3.00 18371.00 71.76 0.00 0.00 0.00 0.00 0.00 00:18:05.765 =================================================================================================================== 00:18:05.765 Total : 18371.00 71.76 0.00 0.00 0.00 0.00 0.00 00:18:05.765 00:18:06.707 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:06.707 Nvme0n1 : 4.00 18418.00 71.95 0.00 0.00 0.00 0.00 0.00 00:18:06.707 =================================================================================================================== 00:18:06.707 Total : 18418.00 71.95 0.00 0.00 0.00 0.00 0.00 00:18:06.707 00:18:08.093 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:08.093 Nvme0n1 : 5.00 18433.60 72.01 0.00 0.00 0.00 0.00 0.00 00:18:08.093 =================================================================================================================== 00:18:08.093 Total : 18433.60 72.01 0.00 0.00 0.00 0.00 0.00 00:18:08.093 00:18:08.666 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:08.666 Nvme0n1 : 6.00 18454.67 72.09 0.00 0.00 0.00 0.00 0.00 00:18:08.666 
=================================================================================================================== 00:18:08.666 Total : 18454.67 72.09 0.00 0.00 0.00 0.00 0.00 00:18:08.666 00:18:10.052 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:10.052 Nvme0n1 : 7.00 18469.71 72.15 0.00 0.00 0.00 0.00 0.00 00:18:10.052 =================================================================================================================== 00:18:10.052 Total : 18469.71 72.15 0.00 0.00 0.00 0.00 0.00 00:18:10.052 00:18:10.993 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:10.993 Nvme0n1 : 8.00 18481.00 72.19 0.00 0.00 0.00 0.00 0.00 00:18:10.993 =================================================================================================================== 00:18:10.993 Total : 18481.00 72.19 0.00 0.00 0.00 0.00 0.00 00:18:10.993 00:18:11.936 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:11.936 Nvme0n1 : 9.00 18489.78 72.23 0.00 0.00 0.00 0.00 0.00 00:18:11.936 =================================================================================================================== 00:18:11.936 Total : 18489.78 72.23 0.00 0.00 0.00 0.00 0.00 00:18:11.936 00:18:12.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:12.879 Nvme0n1 : 10.00 18503.10 72.28 0.00 0.00 0.00 0.00 0.00 00:18:12.879 =================================================================================================================== 00:18:12.879 Total : 18503.10 72.28 0.00 0.00 0.00 0.00 0.00 00:18:12.879 00:18:12.879 00:18:12.879 Latency(us) 00:18:12.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:12.879 Nvme0n1 : 10.00 18501.63 72.27 0.00 0.00 6913.10 4450.99 15619.41 00:18:12.879 =================================================================================================================== 00:18:12.879 Total : 18501.63 72.27 0.00 0.00 6913.10 4450.99 15619.41 00:18:12.879 0 00:18:12.879 20:10:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4193294 00:18:12.879 20:10:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 4193294 ']' 00:18:12.879 20:10:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 4193294 00:18:12.879 20:10:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:18:12.879 20:10:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:12.879 20:10:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4193294 00:18:12.879 20:10:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:12.879 20:10:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:12.879 20:10:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4193294' 00:18:12.879 killing process with pid 4193294 00:18:12.879 20:10:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 4193294 00:18:12.879 Received shutdown signal, test time was about 10.000000 seconds 00:18:12.879 00:18:12.879 Latency(us) 00:18:12.879 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:18:12.879 =================================================================================================================== 00:18:12.879 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:12.879 20:10:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 4193294 00:18:12.879 20:10:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:13.140 20:10:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:13.402 20:10:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d14e92f-4648-4011-b735-8b3b1f556ece 00:18:13.402 20:10:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:18:13.663 20:10:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:18:13.663 20:10:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:18:13.663 20:10:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 4189496 00:18:13.663 20:10:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 4189496 00:18:13.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 4189496 Killed "${NVMF_APP[@]}" "$@" 00:18:13.663 20:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:18:13.663 20:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:18:13.663 20:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:13.663 20:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:13.663 20:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:13.663 20:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2194 00:18:13.663 20:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2194 00:18:13.663 20:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:13.663 20:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 2194 ']' 00:18:13.663 20:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.663 20:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:13.663 20:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
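This is the step that gives lvs_grow_dirty its name: the records just above kill the nvmf target with the lvstore still loaded (kill -9 of pid 4189496) and restart it, and the records that follow re-create the AIO bdev so the lvstore is reloaded dirty, which produces the "Performing recovery on blobstore" notices. A minimal sketch of that sequence, assuming the same paths and with illustrative variable names:

# Hedged sketch of the dirty-shutdown/recovery step traced here.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
kill -9 "$nvmfpid"                                       # kill the target without deleting the lvstore first
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done      # stand-in for waitforlisten
# Re-creating the AIO bdev triggers examine; the lvstore is found dirty and recovered.
"$SPDK/scripts/rpc.py" bdev_aio_create "$SPDK/test/nvmf/target/aio_bdev" aio_bdev 4096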
00:18:13.663 20:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:13.663 20:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:13.663 [2024-05-15 20:10:06.061106] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:18:13.663 [2024-05-15 20:10:06.061159] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.663 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.663 [2024-05-15 20:10:06.150473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.924 [2024-05-15 20:10:06.215020] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:13.924 [2024-05-15 20:10:06.215056] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:13.924 [2024-05-15 20:10:06.215064] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:13.924 [2024-05-15 20:10:06.215071] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:13.925 [2024-05-15 20:10:06.215076] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:13.925 [2024-05-15 20:10:06.215093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.496 20:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:14.496 20:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:18:14.496 20:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:14.496 20:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:14.496 20:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:14.496 20:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:14.496 20:10:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:14.756 [2024-05-15 20:10:07.140181] blobstore.c:4838:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:14.756 [2024-05-15 20:10:07.140266] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:14.756 [2024-05-15 20:10:07.140299] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:14.756 20:10:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:18:14.757 20:10:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a6cb1936-7876-48de-86db-47f62929109d 00:18:14.757 20:10:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=a6cb1936-7876-48de-86db-47f62929109d 00:18:14.757 20:10:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:14.757 20:10:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:18:14.757 20:10:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:14.757 20:10:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:14.757 20:10:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:15.017 20:10:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a6cb1936-7876-48de-86db-47f62929109d -t 2000 00:18:15.277 [ 00:18:15.277 { 00:18:15.277 "name": "a6cb1936-7876-48de-86db-47f62929109d", 00:18:15.277 "aliases": [ 00:18:15.277 "lvs/lvol" 00:18:15.277 ], 00:18:15.277 "product_name": "Logical Volume", 00:18:15.277 "block_size": 4096, 00:18:15.277 "num_blocks": 38912, 00:18:15.277 "uuid": "a6cb1936-7876-48de-86db-47f62929109d", 00:18:15.277 "assigned_rate_limits": { 00:18:15.277 "rw_ios_per_sec": 0, 00:18:15.277 "rw_mbytes_per_sec": 0, 00:18:15.277 "r_mbytes_per_sec": 0, 00:18:15.277 "w_mbytes_per_sec": 0 00:18:15.277 }, 00:18:15.277 "claimed": false, 00:18:15.277 "zoned": false, 00:18:15.277 "supported_io_types": { 00:18:15.277 "read": true, 00:18:15.277 "write": true, 00:18:15.277 "unmap": true, 00:18:15.277 "write_zeroes": true, 00:18:15.277 "flush": false, 00:18:15.277 "reset": true, 00:18:15.277 "compare": false, 00:18:15.277 "compare_and_write": false, 00:18:15.277 "abort": false, 00:18:15.277 "nvme_admin": false, 00:18:15.277 "nvme_io": false 00:18:15.277 }, 00:18:15.277 "driver_specific": { 00:18:15.277 "lvol": { 00:18:15.277 "lvol_store_uuid": "9d14e92f-4648-4011-b735-8b3b1f556ece", 00:18:15.277 "base_bdev": "aio_bdev", 00:18:15.277 "thin_provision": false, 00:18:15.278 "num_allocated_clusters": 38, 00:18:15.278 "snapshot": false, 00:18:15.278 "clone": false, 00:18:15.278 "esnap_clone": false 00:18:15.278 } 00:18:15.278 } 00:18:15.278 } 00:18:15.278 ] 00:18:15.278 20:10:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:18:15.278 20:10:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d14e92f-4648-4011-b735-8b3b1f556ece 00:18:15.278 20:10:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:18:15.278 20:10:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:18:15.278 20:10:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d14e92f-4648-4011-b735-8b3b1f556ece 00:18:15.278 20:10:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:18:15.539 20:10:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:18:15.539 20:10:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:15.801 [2024-05-15 20:10:08.128766] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:15.801 20:10:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
9d14e92f-4648-4011-b735-8b3b1f556ece 00:18:15.801 20:10:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:18:15.801 20:10:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d14e92f-4648-4011-b735-8b3b1f556ece 00:18:15.801 20:10:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:15.801 20:10:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:15.801 20:10:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:15.801 20:10:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:15.801 20:10:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:15.801 20:10:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:15.801 20:10:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:15.801 20:10:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:15.801 20:10:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d14e92f-4648-4011-b735-8b3b1f556ece 00:18:16.063 request: 00:18:16.063 { 00:18:16.063 "uuid": "9d14e92f-4648-4011-b735-8b3b1f556ece", 00:18:16.063 "method": "bdev_lvol_get_lvstores", 00:18:16.063 "req_id": 1 00:18:16.063 } 00:18:16.063 Got JSON-RPC error response 00:18:16.063 response: 00:18:16.063 { 00:18:16.063 "code": -19, 00:18:16.063 "message": "No such device" 00:18:16.063 } 00:18:16.063 20:10:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:18:16.063 20:10:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:16.063 20:10:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:16.063 20:10:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:16.063 20:10:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:16.325 aio_bdev 00:18:16.325 20:10:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a6cb1936-7876-48de-86db-47f62929109d 00:18:16.325 20:10:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=a6cb1936-7876-48de-86db-47f62929109d 00:18:16.325 20:10:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:16.325 20:10:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:18:16.325 20:10:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 
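After the expected bdev_lvol_get_lvstores failure above (the lvstore is absent while aio_bdev is deleted), the records just above re-create the AIO bdev, and the records below wait for the recovered lvol to reappear and check that the grown geometry survived the dirty shutdown: 61 free clusters and 99 total data clusters, the same figures the clean test verified. Condensed into the commands visible in the trace, with $SPDK, $lvs and $lvol as shorthand for the logged path and UUIDs, that check is roughly:

# Hedged sketch of the post-recovery verification performed below.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
lvs=9d14e92f-4648-4011-b735-8b3b1f556ece                 # lvstore UUID from this run
lvol=a6cb1936-7876-48de-86db-47f62929109d                # lvol UUID from this run
"$SPDK/scripts/rpc.py" bdev_wait_for_examine
"$SPDK/scripts/rpc.py" bdev_get_bdevs -b "$lvol" -t 2000 >/dev/null          # lvol is back after recovery
free=$("$SPDK/scripts/rpc.py" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
total=$("$SPDK/scripts/rpc.py" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
(( free == 61 && total == 99 )) || echo "unexpected cluster counts: free=$free total=$total" >&2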
00:18:16.325 20:10:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:16.325 20:10:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:16.325 20:10:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a6cb1936-7876-48de-86db-47f62929109d -t 2000 00:18:16.586 [ 00:18:16.586 { 00:18:16.586 "name": "a6cb1936-7876-48de-86db-47f62929109d", 00:18:16.586 "aliases": [ 00:18:16.586 "lvs/lvol" 00:18:16.586 ], 00:18:16.586 "product_name": "Logical Volume", 00:18:16.586 "block_size": 4096, 00:18:16.586 "num_blocks": 38912, 00:18:16.586 "uuid": "a6cb1936-7876-48de-86db-47f62929109d", 00:18:16.586 "assigned_rate_limits": { 00:18:16.586 "rw_ios_per_sec": 0, 00:18:16.586 "rw_mbytes_per_sec": 0, 00:18:16.586 "r_mbytes_per_sec": 0, 00:18:16.586 "w_mbytes_per_sec": 0 00:18:16.586 }, 00:18:16.586 "claimed": false, 00:18:16.586 "zoned": false, 00:18:16.586 "supported_io_types": { 00:18:16.586 "read": true, 00:18:16.586 "write": true, 00:18:16.586 "unmap": true, 00:18:16.586 "write_zeroes": true, 00:18:16.586 "flush": false, 00:18:16.586 "reset": true, 00:18:16.586 "compare": false, 00:18:16.586 "compare_and_write": false, 00:18:16.586 "abort": false, 00:18:16.586 "nvme_admin": false, 00:18:16.586 "nvme_io": false 00:18:16.586 }, 00:18:16.586 "driver_specific": { 00:18:16.586 "lvol": { 00:18:16.586 "lvol_store_uuid": "9d14e92f-4648-4011-b735-8b3b1f556ece", 00:18:16.586 "base_bdev": "aio_bdev", 00:18:16.586 "thin_provision": false, 00:18:16.586 "num_allocated_clusters": 38, 00:18:16.586 "snapshot": false, 00:18:16.586 "clone": false, 00:18:16.586 "esnap_clone": false 00:18:16.586 } 00:18:16.586 } 00:18:16.586 } 00:18:16.586 ] 00:18:16.586 20:10:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:18:16.586 20:10:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:18:16.586 20:10:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d14e92f-4648-4011-b735-8b3b1f556ece 00:18:16.847 20:10:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:18:16.847 20:10:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9d14e92f-4648-4011-b735-8b3b1f556ece 00:18:16.847 20:10:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:18:17.107 20:10:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:18:17.107 20:10:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a6cb1936-7876-48de-86db-47f62929109d 00:18:17.107 20:10:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9d14e92f-4648-4011-b735-8b3b1f556ece 00:18:17.367 20:10:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:17.626 20:10:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:17.626 00:18:17.626 real 0m17.895s 00:18:17.626 user 0m46.611s 00:18:17.626 sys 0m2.863s 00:18:17.626 20:10:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:17.626 20:10:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:17.626 ************************************ 00:18:17.626 END TEST lvs_grow_dirty 00:18:17.626 ************************************ 00:18:17.626 20:10:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:18:17.626 20:10:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:18:17.626 20:10:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:18:17.626 20:10:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:18:17.626 20:10:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:17.626 20:10:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:18:17.626 20:10:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:18:17.626 20:10:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:18:17.626 20:10:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:17.626 nvmf_trace.0 00:18:17.626 20:10:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:18:17.626 20:10:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:17.626 20:10:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:17.626 20:10:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:18:17.626 20:10:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:17.626 20:10:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:18:17.626 20:10:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:17.626 20:10:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:17.626 rmmod nvme_tcp 00:18:17.626 rmmod nvme_fabrics 00:18:17.885 rmmod nvme_keyring 00:18:17.885 20:10:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:17.885 20:10:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:18:17.885 20:10:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:18:17.885 20:10:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2194 ']' 00:18:17.885 20:10:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2194 00:18:17.885 20:10:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 2194 ']' 00:18:17.885 20:10:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 2194 00:18:17.885 20:10:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:18:17.885 20:10:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:17.885 20:10:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2194 00:18:17.885 20:10:10 nvmf_tcp.nvmf_lvs_grow -- 
common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:17.885 20:10:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:17.885 20:10:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2194' 00:18:17.885 killing process with pid 2194 00:18:17.885 20:10:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 2194 00:18:17.885 20:10:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 2194 00:18:17.885 20:10:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:17.885 20:10:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:17.885 20:10:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:17.885 20:10:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:17.885 20:10:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:17.885 20:10:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.885 20:10:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:17.885 20:10:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.430 20:10:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:20.430 00:18:20.430 real 0m46.391s 00:18:20.430 user 1m9.484s 00:18:20.430 sys 0m11.077s 00:18:20.430 20:10:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:20.430 20:10:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:20.430 ************************************ 00:18:20.430 END TEST nvmf_lvs_grow 00:18:20.430 ************************************ 00:18:20.430 20:10:12 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:20.430 20:10:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:20.430 20:10:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:20.430 20:10:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:20.430 ************************************ 00:18:20.430 START TEST nvmf_bdev_io_wait 00:18:20.430 ************************************ 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:20.430 * Looking for test storage... 
00:18:20.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:18:20.430 20:10:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:28.571 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:28.571 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:28.571 Found net devices under 0000:31:00.0: cvl_0_0 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:28.571 Found net devices under 0000:31:00.1: cvl_0_1 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:28.571 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:28.572 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:28.572 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:28.572 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:28.572 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:28.572 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:28.572 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:28.572 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:28.572 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:28.572 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:28.572 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:28.572 20:10:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:28.572 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:28.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:28.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.509 ms 00:18:28.572 00:18:28.572 --- 10.0.0.2 ping statistics --- 00:18:28.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.572 rtt min/avg/max/mdev = 0.509/0.509/0.509/0.000 ms 00:18:28.572 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:28.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:28.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.337 ms 00:18:28.572 00:18:28.572 --- 10.0.0.1 ping statistics --- 00:18:28.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.572 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:18:28.572 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:28.572 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:18:28.572 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:28.572 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:28.572 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:28.572 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:28.572 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:28.572 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:28.572 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:28.572 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:28.572 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:28.572 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:28.572 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:28.572 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=7950 00:18:28.572 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 7950 00:18:28.572 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 7950 ']' 00:18:28.572 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:28.572 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.572 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:28.832 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.832 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:28.832 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:28.832 [2024-05-15 20:10:21.117493] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
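For orientation, the connectivity checks above run against a two-port phy setup that the common helpers configured earlier in this trace: one port is moved into a network namespace and addressed as the target, the other stays in the default namespace as the initiator. A hedged sketch of that setup, reusing the interface names and addresses printed above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the default netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The nvmf_tgt process itself is then launched inside the same namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc), as traced just above.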
00:18:28.832 [2024-05-15 20:10:21.117546] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.832 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.832 [2024-05-15 20:10:21.207837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:28.832 [2024-05-15 20:10:21.280455] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:28.832 [2024-05-15 20:10:21.280497] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:28.832 [2024-05-15 20:10:21.280505] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:28.832 [2024-05-15 20:10:21.280512] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:28.832 [2024-05-15 20:10:21.280518] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:28.832 [2024-05-15 20:10:21.280630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.832 [2024-05-15 20:10:21.280746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:28.832 [2024-05-15 20:10:21.280904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.832 [2024-05-15 20:10:21.280905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:29.776 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:29.776 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:18:29.776 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:29.776 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:29.776 20:10:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:29.776 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:29.776 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:29.776 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.776 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:29.776 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.776 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:29.776 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.776 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:29.776 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.776 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:29.776 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.776 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:29.776 [2024-05-15 20:10:22.084950] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:29.776 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.776 20:10:22 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:29.776 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.776 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:29.776 Malloc0 00:18:29.776 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.776 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:29.776 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.776 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:29.776 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.776 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:29.776 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.776 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:29.776 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.776 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:29.776 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:29.777 [2024-05-15 20:10:22.138317] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:29.777 [2024-05-15 20:10:22.138550] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=8155 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=8156 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=8158 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=8160 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- 
# gen_nvmf_target_json 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:29.777 { 00:18:29.777 "params": { 00:18:29.777 "name": "Nvme$subsystem", 00:18:29.777 "trtype": "$TEST_TRANSPORT", 00:18:29.777 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:29.777 "adrfam": "ipv4", 00:18:29.777 "trsvcid": "$NVMF_PORT", 00:18:29.777 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:29.777 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:29.777 "hdgst": ${hdgst:-false}, 00:18:29.777 "ddgst": ${ddgst:-false} 00:18:29.777 }, 00:18:29.777 "method": "bdev_nvme_attach_controller" 00:18:29.777 } 00:18:29.777 EOF 00:18:29.777 )") 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:29.777 { 00:18:29.777 "params": { 00:18:29.777 "name": "Nvme$subsystem", 00:18:29.777 "trtype": "$TEST_TRANSPORT", 00:18:29.777 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:29.777 "adrfam": "ipv4", 00:18:29.777 "trsvcid": "$NVMF_PORT", 00:18:29.777 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:29.777 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:29.777 "hdgst": ${hdgst:-false}, 00:18:29.777 "ddgst": ${ddgst:-false} 00:18:29.777 }, 00:18:29.777 "method": "bdev_nvme_attach_controller" 00:18:29.777 } 00:18:29.777 EOF 00:18:29.777 )") 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:29.777 { 00:18:29.777 "params": { 00:18:29.777 "name": "Nvme$subsystem", 00:18:29.777 "trtype": "$TEST_TRANSPORT", 00:18:29.777 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:29.777 "adrfam": "ipv4", 00:18:29.777 "trsvcid": "$NVMF_PORT", 00:18:29.777 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:29.777 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:29.777 "hdgst": ${hdgst:-false}, 00:18:29.777 "ddgst": ${ddgst:-false} 00:18:29.777 }, 00:18:29.777 "method": "bdev_nvme_attach_controller" 00:18:29.777 } 00:18:29.777 EOF 00:18:29.777 )") 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:29.777 { 00:18:29.777 "params": { 00:18:29.777 "name": "Nvme$subsystem", 00:18:29.777 "trtype": "$TEST_TRANSPORT", 00:18:29.777 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:29.777 "adrfam": "ipv4", 00:18:29.777 "trsvcid": "$NVMF_PORT", 00:18:29.777 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:29.777 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:29.777 "hdgst": ${hdgst:-false}, 00:18:29.777 "ddgst": ${ddgst:-false} 00:18:29.777 }, 00:18:29.777 "method": "bdev_nvme_attach_controller" 00:18:29.777 } 00:18:29.777 EOF 00:18:29.777 )") 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 8155 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:29.777 "params": { 00:18:29.777 "name": "Nvme1", 00:18:29.777 "trtype": "tcp", 00:18:29.777 "traddr": "10.0.0.2", 00:18:29.777 "adrfam": "ipv4", 00:18:29.777 "trsvcid": "4420", 00:18:29.777 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.777 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:29.777 "hdgst": false, 00:18:29.777 "ddgst": false 00:18:29.777 }, 00:18:29.777 "method": "bdev_nvme_attach_controller" 00:18:29.777 }' 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:29.777 "params": { 00:18:29.777 "name": "Nvme1", 00:18:29.777 "trtype": "tcp", 00:18:29.777 "traddr": "10.0.0.2", 00:18:29.777 "adrfam": "ipv4", 00:18:29.777 "trsvcid": "4420", 00:18:29.777 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.777 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:29.777 "hdgst": false, 00:18:29.777 "ddgst": false 00:18:29.777 }, 00:18:29.777 "method": "bdev_nvme_attach_controller" 00:18:29.777 }' 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:29.777 "params": { 00:18:29.777 "name": "Nvme1", 00:18:29.777 "trtype": "tcp", 00:18:29.777 "traddr": "10.0.0.2", 00:18:29.777 "adrfam": "ipv4", 00:18:29.777 "trsvcid": "4420", 00:18:29.777 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.777 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:29.777 "hdgst": false, 00:18:29.777 "ddgst": false 00:18:29.777 }, 00:18:29.777 "method": "bdev_nvme_attach_controller" 00:18:29.777 }' 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:29.777 20:10:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:29.777 "params": { 00:18:29.777 "name": "Nvme1", 00:18:29.777 "trtype": "tcp", 00:18:29.777 "traddr": "10.0.0.2", 00:18:29.777 "adrfam": "ipv4", 00:18:29.777 "trsvcid": "4420", 00:18:29.777 
"subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.777 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:29.777 "hdgst": false, 00:18:29.777 "ddgst": false 00:18:29.777 }, 00:18:29.777 "method": "bdev_nvme_attach_controller" 00:18:29.777 }' 00:18:29.777 [2024-05-15 20:10:22.188108] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:18:29.777 [2024-05-15 20:10:22.188160] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:29.777 [2024-05-15 20:10:22.189130] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:18:29.777 [2024-05-15 20:10:22.189131] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:18:29.777 [2024-05-15 20:10:22.189178] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-05-15 20:10:22.189178] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:18:29.777 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:29.777 [2024-05-15 20:10:22.190406] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:18:29.777 [2024-05-15 20:10:22.190453] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:29.777 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.039 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.039 [2024-05-15 20:10:22.346604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.039 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.039 [2024-05-15 20:10:22.397602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:30.039 [2024-05-15 20:10:22.409477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.039 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.039 [2024-05-15 20:10:22.459456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.039 [2024-05-15 20:10:22.459721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:30.039 [2024-05-15 20:10:22.499594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.039 [2024-05-15 20:10:22.511724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:18:30.300 [2024-05-15 20:10:22.549201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:30.300 Running I/O for 1 seconds... 00:18:30.300 Running I/O for 1 seconds... 00:18:30.561 Running I/O for 1 seconds... 00:18:30.561 Running I/O for 1 seconds... 
00:18:31.133 00:18:31.133 Latency(us) 00:18:31.133 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.133 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:18:31.133 Nvme1n1 : 1.01 9278.92 36.25 0.00 0.00 13708.87 7973.55 24248.32 00:18:31.133 =================================================================================================================== 00:18:31.133 Total : 9278.92 36.25 0.00 0.00 13708.87 7973.55 24248.32 00:18:31.133 00:18:31.133 Latency(us) 00:18:31.133 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.133 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:31.133 Nvme1n1 : 1.01 12814.63 50.06 0.00 0.00 9949.76 6935.89 21845.33 00:18:31.133 =================================================================================================================== 00:18:31.133 Total : 12814.63 50.06 0.00 0.00 9949.76 6935.89 21845.33 00:18:31.394 20:10:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 8156 00:18:31.394 20:10:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 8158 00:18:31.394 00:18:31.394 Latency(us) 00:18:31.394 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.394 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:31.394 Nvme1n1 : 1.00 188222.12 735.24 0.00 0.00 676.92 266.24 747.52 00:18:31.394 =================================================================================================================== 00:18:31.394 Total : 188222.12 735.24 0.00 0.00 676.92 266.24 747.52 00:18:31.394 00:18:31.394 Latency(us) 00:18:31.394 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.394 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:31.394 Nvme1n1 : 1.00 10180.20 39.77 0.00 0.00 12544.82 3932.16 35170.99 00:18:31.394 =================================================================================================================== 00:18:31.394 Total : 10180.20 39.77 0.00 0.00 12544.82 3932.16 35170.99 00:18:31.653 20:10:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 8160 00:18:31.653 20:10:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:31.653 20:10:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.653 20:10:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:31.653 20:10:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.653 20:10:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:31.653 20:10:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:31.653 20:10:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:31.653 20:10:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:18:31.653 20:10:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:31.653 20:10:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:18:31.653 20:10:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:31.653 20:10:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:31.653 rmmod nvme_tcp 00:18:31.653 rmmod nvme_fabrics 00:18:31.653 rmmod nvme_keyring 00:18:31.653 20:10:24 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:31.653 20:10:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:18:31.653 20:10:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:18:31.653 20:10:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 7950 ']' 00:18:31.653 20:10:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 7950 00:18:31.653 20:10:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 7950 ']' 00:18:31.653 20:10:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 7950 00:18:31.653 20:10:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:18:31.653 20:10:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:31.653 20:10:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 7950 00:18:31.653 20:10:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:31.653 20:10:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:31.653 20:10:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 7950' 00:18:31.653 killing process with pid 7950 00:18:31.653 20:10:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 7950 00:18:31.653 [2024-05-15 20:10:24.136346] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:31.653 20:10:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 7950 00:18:31.915 20:10:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:31.915 20:10:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:31.915 20:10:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:31.915 20:10:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:31.915 20:10:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:31.915 20:10:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.915 20:10:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.915 20:10:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.465 20:10:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:34.465 00:18:34.465 real 0m13.813s 00:18:34.465 user 0m19.900s 00:18:34.465 sys 0m7.592s 00:18:34.465 20:10:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:34.465 20:10:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:34.465 ************************************ 00:18:34.465 END TEST nvmf_bdev_io_wait 00:18:34.465 ************************************ 00:18:34.465 20:10:26 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:34.465 20:10:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:34.465 20:10:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:34.465 20:10:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:34.465 
************************************ 00:18:34.465 START TEST nvmf_queue_depth 00:18:34.465 ************************************ 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:34.465 * Looking for test storage... 00:18:34.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:18:34.465 20:10:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:42.716 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:42.717 
20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:42.717 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:42.717 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:42.717 Found net devices under 0000:31:00.0: cvl_0_0 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:42.717 Found net devices under 0000:31:00.1: cvl_0_1 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:42.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:42.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.685 ms 00:18:42.717 00:18:42.717 --- 10.0.0.2 ping statistics --- 00:18:42.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.717 rtt min/avg/max/mdev = 0.685/0.685/0.685/0.000 ms 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:42.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:42.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:18:42.717 00:18:42.717 --- 10.0.0.1 ping statistics --- 00:18:42.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.717 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=13228 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 13228 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 13228 ']' 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.717 20:10:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:42.718 20:10:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:42.718 [2024-05-15 20:10:34.821883] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:18:42.718 [2024-05-15 20:10:34.821931] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.718 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.718 [2024-05-15 20:10:34.887024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.718 [2024-05-15 20:10:34.951736] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:42.718 [2024-05-15 20:10:34.951771] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:42.718 [2024-05-15 20:10:34.951779] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:42.718 [2024-05-15 20:10:34.951785] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:42.718 [2024-05-15 20:10:34.951791] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:42.718 [2024-05-15 20:10:34.951809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.718 20:10:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:42.718 20:10:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:18:42.718 20:10:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:42.718 20:10:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:42.718 20:10:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:42.718 20:10:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:42.718 20:10:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:42.718 20:10:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.718 20:10:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:42.718 [2024-05-15 20:10:35.076836] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:42.718 20:10:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.718 20:10:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:42.718 20:10:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.718 20:10:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:42.718 Malloc0 00:18:42.718 20:10:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.718 20:10:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:42.718 20:10:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.718 20:10:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:42.718 20:10:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.718 20:10:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:42.718 20:10:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.718 20:10:35 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:42.718 20:10:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.718 20:10:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:42.718 20:10:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.718 20:10:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:42.718 [2024-05-15 20:10:35.140789] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:42.718 [2024-05-15 20:10:35.141002] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:42.718 20:10:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.718 20:10:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=13296 00:18:42.718 20:10:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:42.718 20:10:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 13296 /var/tmp/bdevperf.sock 00:18:42.718 20:10:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:42.718 20:10:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 13296 ']' 00:18:42.718 20:10:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:42.718 20:10:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:42.718 20:10:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:42.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:42.718 20:10:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:42.718 20:10:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:42.718 [2024-05-15 20:10:35.189586] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:18:42.718 [2024-05-15 20:10:35.189634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid13296 ] 00:18:42.979 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.979 [2024-05-15 20:10:35.270598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.979 [2024-05-15 20:10:35.335806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.979 20:10:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:42.979 20:10:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:18:42.979 20:10:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:42.979 20:10:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.979 20:10:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:43.240 NVMe0n1 00:18:43.240 20:10:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.240 20:10:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:43.240 Running I/O for 10 seconds... 00:18:53.249 00:18:53.249 Latency(us) 00:18:53.249 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.249 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:53.249 Verification LBA range: start 0x0 length 0x4000 00:18:53.249 NVMe0n1 : 10.06 9464.49 36.97 0.00 0.00 107783.51 23374.51 76021.76 00:18:53.249 =================================================================================================================== 00:18:53.249 Total : 9464.49 36.97 0.00 0.00 107783.51 23374.51 76021.76 00:18:53.249 0 00:18:53.249 20:10:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 13296 00:18:53.249 20:10:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 13296 ']' 00:18:53.249 20:10:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 13296 00:18:53.249 20:10:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:18:53.510 20:10:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:53.510 20:10:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 13296 00:18:53.510 20:10:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:53.510 20:10:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:53.511 20:10:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 13296' 00:18:53.511 killing process with pid 13296 00:18:53.511 20:10:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 13296 00:18:53.511 Received shutdown signal, test time was about 10.000000 seconds 00:18:53.511 00:18:53.511 Latency(us) 00:18:53.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.511 =================================================================================================================== 00:18:53.511 Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:18:53.511 20:10:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 13296 00:18:53.511 20:10:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:53.511 20:10:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:53.511 20:10:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:53.511 20:10:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:18:53.511 20:10:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:53.511 20:10:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:18:53.511 20:10:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:53.511 20:10:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:53.511 rmmod nvme_tcp 00:18:53.511 rmmod nvme_fabrics 00:18:53.511 rmmod nvme_keyring 00:18:53.511 20:10:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:53.511 20:10:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:18:53.511 20:10:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:18:53.511 20:10:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 13228 ']' 00:18:53.511 20:10:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 13228 00:18:53.511 20:10:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 13228 ']' 00:18:53.511 20:10:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 13228 00:18:53.511 20:10:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:18:53.511 20:10:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:53.772 20:10:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 13228 00:18:53.772 20:10:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:53.772 20:10:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:53.772 20:10:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 13228' 00:18:53.772 killing process with pid 13228 00:18:53.772 20:10:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 13228 00:18:53.772 [2024-05-15 20:10:46.062211] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:53.772 20:10:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 13228 00:18:53.772 20:10:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:53.772 20:10:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:53.772 20:10:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:53.772 20:10:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:53.772 20:10:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:53.772 20:10:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.773 20:10:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:53.773 20:10:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:18:56.319 20:10:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:56.319 00:18:56.319 real 0m21.846s 00:18:56.319 user 0m24.068s 00:18:56.319 sys 0m7.145s 00:18:56.319 20:10:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:56.319 20:10:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:56.319 ************************************ 00:18:56.319 END TEST nvmf_queue_depth 00:18:56.319 ************************************ 00:18:56.319 20:10:48 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:56.319 20:10:48 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:56.319 20:10:48 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:56.319 20:10:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:56.319 ************************************ 00:18:56.319 START TEST nvmf_target_multipath 00:18:56.319 ************************************ 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:56.319 * Looking for test storage... 00:18:56.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:56.319 20:10:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:18:56.320 20:10:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:04.471 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:04.471 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:04.471 Found net devices under 0000:31:00.0: cvl_0_0 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:04.471 Found net devices under 0000:31:00.1: cvl_0_1 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:04.471 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:04.472 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:04.472 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:04.472 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:04.472 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:04.472 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:04.472 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:04.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:04.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms 00:19:04.472 00:19:04.472 --- 10.0.0.2 ping statistics --- 00:19:04.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:04.472 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:19:04.472 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:04.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:04.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.339 ms 00:19:04.472 00:19:04.472 --- 10.0.0.1 ping statistics --- 00:19:04.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:04.472 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:19:04.472 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:04.472 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:19:04.472 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:04.472 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:04.472 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:04.472 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:04.472 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:04.472 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:04.472 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:04.472 20:10:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:19:04.472 20:10:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:19:04.472 only one NIC for nvmf test 00:19:04.472 20:10:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:19:04.472 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:04.472 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:19:04.472 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:04.472 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:19:04.472 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:04.472 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:04.472 rmmod nvme_tcp 00:19:04.472 rmmod nvme_fabrics 00:19:04.472 rmmod nvme_keyring 00:19:04.472 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:04.472 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:19:04.472 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:19:04.472 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:04.472 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:04.472 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:04.472 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:04.472 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:04.472 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:04.472 20:10:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.472 20:10:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:04.472 20:10:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.388 20:10:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:19:06.388 20:10:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:19:06.388 20:10:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:19:06.388 20:10:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:06.388 20:10:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:19:06.388 20:10:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:06.388 20:10:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:19:06.388 20:10:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:06.388 20:10:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:06.388 20:10:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:06.388 20:10:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:19:06.388 20:10:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:19:06.388 20:10:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:06.388 20:10:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:06.388 20:10:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:06.388 20:10:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:06.388 20:10:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:06.388 20:10:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:06.388 20:10:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.388 20:10:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:06.388 20:10:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.388 20:10:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:06.388 00:19:06.388 real 0m10.515s 00:19:06.389 user 0m2.248s 00:19:06.389 sys 0m6.155s 00:19:06.389 20:10:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:06.389 20:10:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:06.389 ************************************ 00:19:06.389 END TEST nvmf_target_multipath 00:19:06.389 ************************************ 00:19:06.651 20:10:58 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:19:06.651 20:10:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:06.651 20:10:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:06.651 20:10:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:06.651 ************************************ 00:19:06.651 START TEST nvmf_zcopy 00:19:06.651 ************************************ 00:19:06.651 20:10:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:19:06.651 * Looking for test storage... 
00:19:06.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:06.651 20:10:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:06.651 20:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:19:06.651 20:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:06.651 20:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:06.651 20:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:06.651 20:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:06.651 20:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:06.651 20:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:06.651 20:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:06.651 20:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:06.651 20:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:06.651 20:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:06.651 20:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:06.651 20:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:06.651 20:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:06.651 20:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:06.651 20:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:06.651 20:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:06.651 20:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:06.651 20:10:59 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:06.651 20:10:59 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:06.651 20:10:59 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:06.651 20:10:59 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.652 20:10:59 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
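
Sourcing nvmf/common.sh in the trace above pins the test ports (4420, 4421, 4422) and derives the host identity from nvme gen-hostnqn; the host ID printed in the log is the UUID suffix of that NQN. A small sketch that reproduces the same variables; the ${NVME_HOSTNQN##*:} extraction is an assumption chosen to match the logged pair, not necessarily the exact expression the script uses.

#!/usr/bin/env bash
# Sketch of the host-identity setup visible in the nvmf/common.sh trace above.
# Requires nvme-cli for `nvme gen-hostnqn`.
set -euo pipefail

NVMF_PORT=4420
NVMF_SECOND_PORT=4421
NVMF_THIRD_PORT=4422

NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
NVME_HOSTID=${NVME_HOSTNQN##*:}     # keep only the trailing UUID (assumption, consistent with the logged values)
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")   # passed to `nvme connect` by the tests
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn

printf 'ports=%s,%s,%s\nhostnqn=%s\nhostid=%s\nsubnqn=%s\nhost-args=%s\n' \
    "$NVMF_PORT" "$NVMF_SECOND_PORT" "$NVMF_THIRD_PORT" \
    "$NVME_HOSTNQN" "$NVME_HOSTID" "$NVME_SUBNQN" "${NVME_HOST[*]}"
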
00:19:06.652 20:10:59 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.652 20:10:59 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:19:06.652 20:10:59 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.652 20:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:19:06.652 20:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:06.652 20:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:06.652 20:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:06.652 20:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:06.652 20:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:06.652 20:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:06.652 20:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:06.652 20:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:06.652 20:10:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:19:06.652 20:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:06.652 20:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:06.652 20:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:06.652 20:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:06.652 20:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:06.652 20:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.652 20:10:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:06.652 20:10:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.652 20:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:06.652 20:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:06.652 20:10:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:19:06.652 20:10:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:14.795 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:14.795 
20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:14.795 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:14.795 Found net devices under 0000:31:00.0: cvl_0_0 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:14.795 Found net devices under 0000:31:00.1: cvl_0_1 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:14.795 20:11:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:14.795 20:11:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:14.795 20:11:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:14.795 20:11:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:14.795 20:11:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:14.795 20:11:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:14.795 20:11:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:14.795 20:11:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:14.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:14.796 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:19:14.796 00:19:14.796 --- 10.0.0.2 ping statistics --- 00:19:14.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.796 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:19:14.796 20:11:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:14.796 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:14.796 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.370 ms 00:19:14.796 00:19:14.796 --- 10.0.0.1 ping statistics --- 00:19:14.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.796 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:19:14.796 20:11:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:14.796 20:11:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:19:14.796 20:11:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:14.796 20:11:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:14.796 20:11:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:14.796 20:11:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:14.796 20:11:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:14.796 20:11:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:14.796 20:11:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:15.057 20:11:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:19:15.057 20:11:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:15.057 20:11:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:15.057 20:11:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:15.057 20:11:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=24745 00:19:15.057 20:11:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 24745 00:19:15.057 20:11:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:15.057 20:11:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 24745 ']' 00:19:15.057 20:11:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.057 20:11:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:15.057 20:11:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:15.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.057 20:11:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:15.057 20:11:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:15.057 [2024-05-15 20:11:07.362751] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:19:15.057 [2024-05-15 20:11:07.362825] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:15.057 EAL: No free 2048 kB hugepages reported on node 1 00:19:15.057 [2024-05-15 20:11:07.440432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.057 [2024-05-15 20:11:07.513263] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:15.057 [2024-05-15 20:11:07.513300] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
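
At this point nvmf_tcp_init has rebuilt the loopback-style test topology: the first port of the NIC (cvl_0_0) now lives in the cvl_0_0_ns_spdk namespace as the target at 10.0.0.2/24, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24, TCP port 4420 is opened on the initiator interface, and a single ping in each direction confirms the path. A consolidated sketch of exactly those steps; every command, address, namespace and interface name is taken from the trace, and only the shell variable names are introduced here as labels.

#!/usr/bin/env bash
# Consolidated sketch of the namespace/IP setup traced from nvmf/common.sh (nvmf_tcp_init).
set -euo pipefail

TARGET_IF=cvl_0_0                  # moved into the target namespace
INITIATOR_IF=cvl_0_1               # stays in the root namespace
NS=cvl_0_0_ns_spdk
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add "$NVMF_INITIATOR_IP/24" dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add "$NVMF_FIRST_TARGET_IP/24" dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic to the default port on the initiator-side interface.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Connectivity check in both directions, as in the trace.
ping -c 1 "$NVMF_FIRST_TARGET_IP"
ip netns exec "$NS" ping -c 1 "$NVMF_INITIATOR_IP"

This gives the host a real TCP path to itself: nvmf_tgt is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2 in the trace) and the bdevperf jobs later connect to 10.0.0.2:4420 from the root namespace.
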
00:19:15.057 [2024-05-15 20:11:07.513307] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:15.057 [2024-05-15 20:11:07.513318] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:15.057 [2024-05-15 20:11:07.513324] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:15.057 [2024-05-15 20:11:07.513343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:15.999 [2024-05-15 20:11:08.276338] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:15.999 [2024-05-15 20:11:08.292308] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:15.999 [2024-05-15 20:11:08.292498] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:15.999 malloc0 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:15.999 { 00:19:15.999 "params": { 00:19:15.999 "name": "Nvme$subsystem", 00:19:15.999 "trtype": "$TEST_TRANSPORT", 00:19:15.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:15.999 "adrfam": "ipv4", 00:19:15.999 "trsvcid": "$NVMF_PORT", 00:19:15.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:15.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:15.999 "hdgst": ${hdgst:-false}, 00:19:15.999 "ddgst": ${ddgst:-false} 00:19:15.999 }, 00:19:15.999 "method": "bdev_nvme_attach_controller" 00:19:15.999 } 00:19:15.999 EOF 00:19:15.999 )") 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:19:15.999 20:11:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:15.999 "params": { 00:19:15.999 "name": "Nvme1", 00:19:15.999 "trtype": "tcp", 00:19:15.999 "traddr": "10.0.0.2", 00:19:15.999 "adrfam": "ipv4", 00:19:15.999 "trsvcid": "4420", 00:19:15.999 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.999 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:15.999 "hdgst": false, 00:19:15.999 "ddgst": false 00:19:15.999 }, 00:19:15.999 "method": "bdev_nvme_attach_controller" 00:19:15.999 }' 00:19:15.999 [2024-05-15 20:11:08.371776] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:19:15.999 [2024-05-15 20:11:08.371826] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid24952 ] 00:19:15.999 EAL: No free 2048 kB hugepages reported on node 1 00:19:15.999 [2024-05-15 20:11:08.455304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.260 [2024-05-15 20:11:08.519875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.260 Running I/O for 10 seconds... 
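
The rpc_cmd calls traced through this stretch (zcopy.sh@22 through @30) provision the whole target: a TCP transport created with --zcopy, subsystem nqn.2016-06.io.spdk:cnode1 with a data listener and a discovery listener on 10.0.0.2:4420, and a malloc bdev attached as namespace 1. A sketch of the same sequence as a standalone script, under stated assumptions: the rpc_cmd() wrapper below is a stand-in for the autotest helper and simply calls scripts/rpc.py against the /var/tmp/spdk.sock socket named in the trace, SPDK_DIR defaults to the checkout path from this run, and every subcommand and flag is copied from the trace itself.

#!/usr/bin/env bash
# Target provisioning collected from the rpc_cmd trace above.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
# Stand-in for the autotest rpc_cmd helper; assumes the target started by nvmfappstart
# is listening on its default JSON-RPC socket.
rpc_cmd() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }

NQN=nqn.2016-06.io.spdk:cnode1
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy            # TCP transport with zero-copy enabled
rpc_cmd nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener "$NQN" -t tcp -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT"
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT"
rpc_cmd bdev_malloc_create 32 4096 -b malloc0                   # malloc bdev, 4096-byte blocks
rpc_cmd nvmf_subsystem_add_ns "$NQN" malloc0 -n 1               # expose it as namespace 1

With the namespace in place, the first workload is the 10-second bdevperf verify pass launched at zcopy.sh@33 (-t 10 -q 128 -w verify -o 8192, config piped in over /dev/fd/62); its results are the Latency(us) table that follows.
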
00:19:26.267 00:19:26.267 Latency(us) 00:19:26.267 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.267 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:19:26.267 Verification LBA range: start 0x0 length 0x1000 00:19:26.267 Nvme1n1 : 10.01 6880.10 53.75 0.00 0.00 18547.26 1181.01 33423.36 00:19:26.267 =================================================================================================================== 00:19:26.267 Total : 6880.10 53.75 0.00 0.00 18547.26 1181.01 33423.36 00:19:26.529 20:11:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=26957 00:19:26.529 20:11:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:19:26.529 20:11:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:26.529 20:11:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:19:26.529 20:11:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:19:26.529 20:11:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:19:26.529 20:11:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:19:26.529 20:11:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:26.529 20:11:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:26.529 { 00:19:26.529 "params": { 00:19:26.529 "name": "Nvme$subsystem", 00:19:26.529 "trtype": "$TEST_TRANSPORT", 00:19:26.529 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:26.529 "adrfam": "ipv4", 00:19:26.529 "trsvcid": "$NVMF_PORT", 00:19:26.529 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:26.529 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:26.529 "hdgst": ${hdgst:-false}, 00:19:26.529 "ddgst": ${ddgst:-false} 00:19:26.529 }, 00:19:26.529 "method": "bdev_nvme_attach_controller" 00:19:26.529 } 00:19:26.529 EOF 00:19:26.529 )") 00:19:26.529 20:11:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:19:26.529 [2024-05-15 20:11:18.834858] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.529 [2024-05-15 20:11:18.834891] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.529 20:11:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
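
gen_nvmf_target_json, expanded twice in the trace (for --json /dev/fd/62 and /dev/fd/63), emits a bdev_nvme_attach_controller block that tells bdevperf how to reach the subsystem just created. A sketch of an equivalent static config for this run: the params object is exactly what the trace prints, while the surrounding "subsystems"/"bdev" wrapper and the /tmp/bdevperf_nvme.json path are assumptions, since the jq-assembled wrapper is not visible in this log.

#!/usr/bin/env bash
# Hand-rolled equivalent of the gen_nvmf_target_json output consumed by bdevperf in this run.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}

cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Second workload from the trace: 5 s of 50/50 random read/write, 8 KiB I/Os, queue depth 128,
# run in the background so RPCs can be issued against the subsystem while zero-copy I/O is in flight.
"$SPDK_DIR/build/examples/bdevperf" --json /tmp/bdevperf_nvme.json -t 5 -q 128 -w randrw -M 50 -o 8192 &
perfpid=$!
echo "bdevperf running as pid $perfpid"
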
00:19:26.529 [2024-05-15 20:11:18.842855] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.529 [2024-05-15 20:11:18.842867] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.529 20:11:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:19:26.529 20:11:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:26.529 "params": { 00:19:26.529 "name": "Nvme1", 00:19:26.529 "trtype": "tcp", 00:19:26.529 "traddr": "10.0.0.2", 00:19:26.529 "adrfam": "ipv4", 00:19:26.529 "trsvcid": "4420", 00:19:26.529 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.529 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:26.529 "hdgst": false, 00:19:26.529 "ddgst": false 00:19:26.529 }, 00:19:26.529 "method": "bdev_nvme_attach_controller" 00:19:26.529 }' 00:19:26.529 [2024-05-15 20:11:18.850875] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.529 [2024-05-15 20:11:18.850886] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.529 [2024-05-15 20:11:18.858895] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.529 [2024-05-15 20:11:18.858905] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.529 [2024-05-15 20:11:18.866916] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.529 [2024-05-15 20:11:18.866926] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.529 [2024-05-15 20:11:18.872078] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:19:26.529 [2024-05-15 20:11:18.872126] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid26957 ] 00:19:26.529 [2024-05-15 20:11:18.874939] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.529 [2024-05-15 20:11:18.874950] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.529 [2024-05-15 20:11:18.882961] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.529 [2024-05-15 20:11:18.882970] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.530 [2024-05-15 20:11:18.890982] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.530 [2024-05-15 20:11:18.890992] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.530 [2024-05-15 20:11:18.899002] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.530 [2024-05-15 20:11:18.899012] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.530 EAL: No free 2048 kB hugepages reported on node 1 00:19:26.530 [2024-05-15 20:11:18.907023] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.530 [2024-05-15 20:11:18.907033] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.530 [2024-05-15 20:11:18.915046] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.530 [2024-05-15 20:11:18.915056] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.530 [2024-05-15 20:11:18.923069] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.530 [2024-05-15 20:11:18.923079] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.530 [2024-05-15 20:11:18.931091] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.530 [2024-05-15 20:11:18.931101] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.530 [2024-05-15 20:11:18.939112] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.530 [2024-05-15 20:11:18.939122] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.530 [2024-05-15 20:11:18.947133] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.530 [2024-05-15 20:11:18.947142] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.530 [2024-05-15 20:11:18.952765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.530 [2024-05-15 20:11:18.955155] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.530 [2024-05-15 20:11:18.955165] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.530 [2024-05-15 20:11:18.963174] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.530 [2024-05-15 20:11:18.963184] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.530 [2024-05-15 20:11:18.971195] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.530 [2024-05-15 20:11:18.971204] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.530 [2024-05-15 20:11:18.979216] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.530 [2024-05-15 20:11:18.979226] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.530 [2024-05-15 20:11:18.987238] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.530 [2024-05-15 20:11:18.987250] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.530 [2024-05-15 20:11:18.995258] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.530 [2024-05-15 20:11:18.995270] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.530 [2024-05-15 20:11:19.003277] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.530 [2024-05-15 20:11:19.003288] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.530 [2024-05-15 20:11:19.011299] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.530 [2024-05-15 20:11:19.011321] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.530 [2024-05-15 20:11:19.017539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.530 [2024-05-15 20:11:19.019323] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.530 [2024-05-15 20:11:19.019333] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.530 [2024-05-15 20:11:19.027346] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.530 [2024-05-15 20:11:19.027356] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:19:26.792 [2024-05-15 20:11:19.035369] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.792 [2024-05-15 20:11:19.035384] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.792 [2024-05-15 20:11:19.043386] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.792 [2024-05-15 20:11:19.043397] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.792 [2024-05-15 20:11:19.051406] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.792 [2024-05-15 20:11:19.051416] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.792 [2024-05-15 20:11:19.059425] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.792 [2024-05-15 20:11:19.059435] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.792 [2024-05-15 20:11:19.067447] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.792 [2024-05-15 20:11:19.067457] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.792 [2024-05-15 20:11:19.075468] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.792 [2024-05-15 20:11:19.075477] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.792 [2024-05-15 20:11:19.083489] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.792 [2024-05-15 20:11:19.083500] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.792 [2024-05-15 20:11:19.091524] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.792 [2024-05-15 20:11:19.091541] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.792 [2024-05-15 20:11:19.099538] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.792 [2024-05-15 20:11:19.099550] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.792 [2024-05-15 20:11:19.107557] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.792 [2024-05-15 20:11:19.107569] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.792 [2024-05-15 20:11:19.115583] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.792 [2024-05-15 20:11:19.115596] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.792 [2024-05-15 20:11:19.123604] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.792 [2024-05-15 20:11:19.123617] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.792 [2024-05-15 20:11:19.131624] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.792 [2024-05-15 20:11:19.131636] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.792 [2024-05-15 20:11:19.139644] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.792 [2024-05-15 20:11:19.139655] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.792 [2024-05-15 20:11:19.147665] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
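
The repeated error pairs in this stretch, spdk_nvmf_subsystem_add_ns_ext rejecting NSID 1 followed by nvmf_rpc_ns_paused failing the RPC, come from re-issuing nvmf_subsystem_add_ns for a namespace that is already attached while the background randrw job is running; the nvmf_rpc_ns_paused name suggests each attempt briefly pauses the subsystem before the add is rejected and the subsystem resumes. A hedged illustration of a loop that would generate this pattern; the actual loop body in zcopy.sh is not visible in this log, and rpc_cmd and perfpid refer to the stand-ins from the earlier sketches.

#!/usr/bin/env bash
# Illustration only: repeatedly re-adding an NSID that is already in use while I/O runs.
# Each attempt is expected to fail with "Requested NSID 1 already in use", matching the
# paired errors in the log; the real zcopy.sh loop may be structured differently.
NQN=nqn.2016-06.io.spdk:cnode1

while kill -0 "$perfpid" 2> /dev/null; do
    # Failure is tolerated here; the point is to exercise the subsystem pause/resume
    # path while zero-copy I/O is outstanding.
    rpc_cmd nvmf_subsystem_add_ns "$NQN" malloc0 -n 1 || true
done
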
00:19:26.792 [2024-05-15 20:11:19.147677] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.792 [2024-05-15 20:11:19.155688] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.792 [2024-05-15 20:11:19.155699] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.792 [2024-05-15 20:11:19.163710] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.792 [2024-05-15 20:11:19.163720] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.792 [2024-05-15 20:11:19.171735] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.792 [2024-05-15 20:11:19.171750] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.792 [2024-05-15 20:11:19.179755] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.792 [2024-05-15 20:11:19.179764] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.792 [2024-05-15 20:11:19.187777] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.792 [2024-05-15 20:11:19.187788] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.792 [2024-05-15 20:11:19.195800] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.792 [2024-05-15 20:11:19.195810] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.792 [2024-05-15 20:11:19.203822] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.792 [2024-05-15 20:11:19.203834] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.792 [2024-05-15 20:11:19.211846] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.792 [2024-05-15 20:11:19.211858] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.792 [2024-05-15 20:11:19.219869] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.792 [2024-05-15 20:11:19.219881] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.792 [2024-05-15 20:11:19.227888] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.792 [2024-05-15 20:11:19.227898] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.793 [2024-05-15 20:11:19.235910] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.793 [2024-05-15 20:11:19.235920] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.793 [2024-05-15 20:11:19.243933] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.793 [2024-05-15 20:11:19.243943] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.793 [2024-05-15 20:11:19.251954] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.793 [2024-05-15 20:11:19.251964] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.793 [2024-05-15 20:11:19.259974] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.793 [2024-05-15 20:11:19.259985] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.793 [2024-05-15 20:11:19.267999] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.793 [2024-05-15 20:11:19.268013] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.793 [2024-05-15 20:11:19.276022] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.793 [2024-05-15 20:11:19.276040] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.793 Running I/O for 5 seconds... 00:19:26.793 [2024-05-15 20:11:19.284038] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.793 [2024-05-15 20:11:19.284048] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.055 [2024-05-15 20:11:19.297399] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.055 [2024-05-15 20:11:19.297419] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.055 [2024-05-15 20:11:19.305650] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.055 [2024-05-15 20:11:19.305668] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.055 [2024-05-15 20:11:19.317376] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.055 [2024-05-15 20:11:19.317399] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.055 [2024-05-15 20:11:19.325866] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.055 [2024-05-15 20:11:19.325885] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.055 [2024-05-15 20:11:19.337593] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.055 [2024-05-15 20:11:19.337612] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.055 [2024-05-15 20:11:19.346093] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.055 [2024-05-15 20:11:19.346112] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.055 [2024-05-15 20:11:19.356094] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.055 [2024-05-15 20:11:19.356113] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.055 [2024-05-15 20:11:19.365214] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.055 [2024-05-15 20:11:19.365232] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.055 [2024-05-15 20:11:19.374854] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.055 [2024-05-15 20:11:19.374873] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.055 [2024-05-15 20:11:19.384397] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.055 [2024-05-15 20:11:19.384416] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.055 [2024-05-15 20:11:19.393868] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.055 [2024-05-15 20:11:19.393887] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.055 [2024-05-15 20:11:19.403274] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.055 
[2024-05-15 20:11:19.403292] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.055 [2024-05-15 20:11:19.412977] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.055 [2024-05-15 20:11:19.412995] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.055 [2024-05-15 20:11:19.422498] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.055 [2024-05-15 20:11:19.422516] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.055 [2024-05-15 20:11:19.431908] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.055 [2024-05-15 20:11:19.431927] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.055 [2024-05-15 20:11:19.441416] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.055 [2024-05-15 20:11:19.441435] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.055 [2024-05-15 20:11:19.451018] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.055 [2024-05-15 20:11:19.451036] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.055 [2024-05-15 20:11:19.460415] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.055 [2024-05-15 20:11:19.460434] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.055 [2024-05-15 20:11:19.470000] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.055 [2024-05-15 20:11:19.470018] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.055 [2024-05-15 20:11:19.479564] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.055 [2024-05-15 20:11:19.479582] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.055 [2024-05-15 20:11:19.489046] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.055 [2024-05-15 20:11:19.489065] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.055 [2024-05-15 20:11:19.498183] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.055 [2024-05-15 20:11:19.498205] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.055 [2024-05-15 20:11:19.507757] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.055 [2024-05-15 20:11:19.507776] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.055 [2024-05-15 20:11:19.517245] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.055 [2024-05-15 20:11:19.517262] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.055 [2024-05-15 20:11:19.527308] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.055 [2024-05-15 20:11:19.527331] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.055 [2024-05-15 20:11:19.538769] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.055 [2024-05-15 20:11:19.538788] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.055 [2024-05-15 20:11:19.547300] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.055 [2024-05-15 20:11:19.547323] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
[... the same pair of errors (subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use", then nvmf_rpc.c:1536:nvmf_rpc_ns_paused: "Unable to add namespace") repeats continuously from 2024-05-15 20:11:19.557 through 20:11:24.180, elapsed marks 00:19:27.317 to 00:19:31.763; duplicate entries elided ...]
00:19:31.763 [2024-05-15 20:11:24.196610] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.763 [2024-05-15 20:11:24.196628] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.763 [2024-05-15 20:11:24.213641] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.763 [2024-05-15 20:11:24.213659] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.763 [2024-05-15 20:11:24.230452] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.763 [2024-05-15 20:11:24.230471] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:31.763 [2024-05-15 20:11:24.247893] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:31.763 [2024-05-15 20:11:24.247911] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.024 [2024-05-15 20:11:24.264493] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.024 [2024-05-15 20:11:24.264512] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.024 [2024-05-15 20:11:24.281254] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.024 [2024-05-15 20:11:24.281273] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.024 [2024-05-15 20:11:24.298218] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.024 [2024-05-15 20:11:24.298237] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.024 00:19:32.024 Latency(us) 00:19:32.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.024 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:19:32.024 Nvme1n1 : 5.01 13559.70 105.94 0.00 0.00 9428.71 4150.61 19879.25 00:19:32.024 =================================================================================================================== 00:19:32.024 Total : 13559.70 105.94 0.00 0.00 9428.71 4150.61 19879.25 00:19:32.024 [2024-05-15 20:11:24.309556] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.024 [2024-05-15 20:11:24.309574] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.024 [2024-05-15 20:11:24.321590] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.024 [2024-05-15 20:11:24.321605] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.024 [2024-05-15 20:11:24.333618] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.024 [2024-05-15 20:11:24.333639] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.024 [2024-05-15 20:11:24.345647] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.024 [2024-05-15 20:11:24.345662] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.024 [2024-05-15 20:11:24.357680] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.024 [2024-05-15 20:11:24.357692] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.024 [2024-05-15 20:11:24.369710] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.024 [2024-05-15 20:11:24.369722] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.024 [2024-05-15 20:11:24.381743] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.024 [2024-05-15 20:11:24.381753] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.024 [2024-05-15 20:11:24.393774] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.024 [2024-05-15 20:11:24.393787] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.024 [2024-05-15 20:11:24.405806] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.024 [2024-05-15 20:11:24.405817] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.024 [2024-05-15 20:11:24.417838] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.024 [2024-05-15 20:11:24.417852] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.024 [2024-05-15 20:11:24.429868] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.024 [2024-05-15 20:11:24.429878] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.024 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (26957) - No such process 00:19:32.024 20:11:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 26957 00:19:32.025 20:11:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:32.025 20:11:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.025 20:11:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:32.025 20:11:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.025 20:11:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:19:32.025 20:11:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.025 20:11:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:32.025 delay0 00:19:32.025 20:11:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.025 20:11:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:19:32.025 20:11:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.025 20:11:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:32.025 20:11:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.025 20:11:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:19:32.025 EAL: No free 2048 kB hugepages reported on node 1 00:19:32.285 [2024-05-15 20:11:24.624522] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:19:38.872 [2024-05-15 20:11:30.823606] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1537390 is same with the state(5) to be set 00:19:38.872 Initializing NVMe Controllers 00:19:38.872 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:38.872 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 
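Condensed, the tail of the zcopy test recorded above swaps the malloc namespace for a deliberately slow delay bdev and then drives abortable random I/O at it over TCP. The following is a minimal sketch of that same sequence, assuming the target's default RPC socket and that the test's rpc_cmd wrapper forwards to scripts/rpc.py exactly as traced:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$SPDK/scripts/rpc.py"

# Replace NSID 1 with a delay bdev so in-flight I/O stays pending long enough to be aborted.
$rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
$rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

# Run the abort example against the slowed-down namespace (same arguments as in the trace).
"$SPDK/build/examples/abort" -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'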
00:19:38.872 Initialization complete. Launching workers. 00:19:38.872 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 135 00:19:38.872 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 420, failed to submit 35 00:19:38.872 success 230, unsuccess 190, failed 0 00:19:38.872 20:11:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:19:38.872 20:11:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:19:38.872 20:11:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:38.872 20:11:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:19:38.872 20:11:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:38.872 20:11:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:19:38.872 20:11:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:38.872 20:11:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:38.872 rmmod nvme_tcp 00:19:38.872 rmmod nvme_fabrics 00:19:38.872 rmmod nvme_keyring 00:19:38.872 20:11:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:38.872 20:11:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:19:38.872 20:11:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:19:38.872 20:11:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 24745 ']' 00:19:38.872 20:11:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 24745 00:19:38.872 20:11:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 24745 ']' 00:19:38.872 20:11:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 24745 00:19:38.872 20:11:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:19:38.872 20:11:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:38.872 20:11:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 24745 00:19:38.872 20:11:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:38.872 20:11:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:38.872 20:11:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 24745' 00:19:38.872 killing process with pid 24745 00:19:38.872 20:11:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 24745 00:19:38.872 [2024-05-15 20:11:30.948648] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:38.872 20:11:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 24745 00:19:38.872 20:11:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:38.872 20:11:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:38.872 20:11:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:38.872 20:11:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:38.872 20:11:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:38.872 20:11:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.872 20:11:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:38.872 20:11:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:19:40.861 20:11:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:40.861 00:19:40.861 real 0m34.192s 00:19:40.861 user 0m45.151s 00:19:40.861 sys 0m10.836s 00:19:40.861 20:11:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:40.861 20:11:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:40.861 ************************************ 00:19:40.861 END TEST nvmf_zcopy 00:19:40.861 ************************************ 00:19:40.861 20:11:33 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:40.861 20:11:33 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:40.861 20:11:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:40.861 20:11:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:40.861 ************************************ 00:19:40.861 START TEST nvmf_nmic 00:19:40.861 ************************************ 00:19:40.861 20:11:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:40.861 * Looking for test storage... 00:19:40.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:40.861 20:11:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:40.861 20:11:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:19:40.861 20:11:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:40.861 20:11:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:40.861 20:11:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:40.861 20:11:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:40.861 20:11:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:40.861 20:11:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:40.861 20:11:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:40.861 20:11:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:40.861 20:11:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:40.861 20:11:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:41.123 20:11:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:41.123 20:11:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:41.123 20:11:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:41.124 20:11:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:41.124 20:11:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:41.124 20:11:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:41.124 20:11:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:41.124 20:11:33 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:41.124 20:11:33 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:41.124 
20:11:33 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:41.124 20:11:33 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.124 20:11:33 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.124 20:11:33 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.124 20:11:33 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:19:41.124 20:11:33 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.124 20:11:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:19:41.124 20:11:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:41.124 20:11:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:41.124 20:11:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:41.124 20:11:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:41.124 20:11:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:41.124 20:11:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:41.124 20:11:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:41.124 20:11:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:41.124 20:11:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:41.124 20:11:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:41.124 20:11:33 
nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:19:41.124 20:11:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:41.124 20:11:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:41.124 20:11:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:41.124 20:11:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:41.124 20:11:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:41.124 20:11:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.124 20:11:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:41.124 20:11:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.124 20:11:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:41.124 20:11:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:41.124 20:11:33 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:19:41.124 20:11:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 
00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:49.271 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:49.271 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:49.271 Found net devices under 0000:31:00.0: cvl_0_0 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 
-- # [[ tcp == tcp ]] 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.271 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:49.271 Found net devices under 0000:31:00.1: cvl_0_1 00:19:49.272 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.272 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:49.272 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:19:49.272 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:49.272 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:49.272 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:49.272 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:49.272 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:49.272 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:49.272 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:49.272 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:49.272 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:49.272 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:49.272 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:49.272 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:49.272 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:49.272 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:49.272 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:49.272 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:49.272 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:49.272 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:49.272 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:49.272 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:49.533 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:49.533 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:49.533 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:49.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:49.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.401 ms 00:19:49.533 00:19:49.533 --- 10.0.0.2 ping statistics --- 00:19:49.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.533 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:19:49.533 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:49.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:49.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.367 ms 00:19:49.533 00:19:49.533 --- 10.0.0.1 ping statistics --- 00:19:49.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.533 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:19:49.533 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:49.533 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:19:49.533 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:49.533 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:49.533 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:49.533 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:49.533 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:49.533 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:49.533 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:49.533 20:11:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:19:49.533 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:49.533 20:11:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:49.533 20:11:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:49.533 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=34032 00:19:49.533 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 34032 00:19:49.533 20:11:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:49.533 20:11:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 34032 ']' 00:19:49.533 20:11:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.533 20:11:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:49.533 20:11:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.533 20:11:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:49.533 20:11:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:49.533 [2024-05-15 20:11:41.933775] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
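For reference, the network layout that nvmftestinit builds in the trace above moves the first E810 port (cvl_0_0) into a private network namespace as the target-side interface, while the second port (cvl_0_1) stays in the root namespace as the initiator side. A condensed sketch of those steps, using the exact names and addresses from the log and assuming the interfaces already exist and the shell runs as root:

# Move the target-side port into its own namespace and address both ends.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address

# Bring the links up and accept TCP traffic to port 4420 arriving on the initiator interface.
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Verify connectivity in both directions, then start the target inside the namespace.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF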
00:19:49.533 [2024-05-15 20:11:41.933844] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:49.533 EAL: No free 2048 kB hugepages reported on node 1 00:19:49.533 [2024-05-15 20:11:42.029383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:49.795 [2024-05-15 20:11:42.128127] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:49.795 [2024-05-15 20:11:42.128188] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:49.795 [2024-05-15 20:11:42.128197] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:49.795 [2024-05-15 20:11:42.128203] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:49.795 [2024-05-15 20:11:42.128209] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:49.795 [2024-05-15 20:11:42.128361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:49.795 [2024-05-15 20:11:42.128436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.795 [2024-05-15 20:11:42.128662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:49.795 [2024-05-15 20:11:42.128664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.367 20:11:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:50.367 20:11:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:19:50.367 20:11:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:50.367 20:11:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:50.367 20:11:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:50.367 20:11:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:50.367 20:11:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:50.367 20:11:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.367 20:11:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:50.367 [2024-05-15 20:11:42.866052] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:50.627 20:11:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.627 20:11:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:50.627 20:11:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.627 20:11:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:50.627 Malloc0 00:19:50.627 20:11:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.627 20:11:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:50.627 20:11:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.627 20:11:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:50.627 20:11:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.627 20:11:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:50.627 20:11:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.627 20:11:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:50.627 20:11:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.627 20:11:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:50.627 20:11:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.627 20:11:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:50.627 [2024-05-15 20:11:42.922698] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:50.627 [2024-05-15 20:11:42.922923] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:50.627 20:11:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.627 20:11:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:50.627 test case1: single bdev can't be used in multiple subsystems 00:19:50.627 20:11:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:50.627 20:11:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.627 20:11:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:50.627 20:11:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.627 20:11:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:50.627 20:11:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.627 20:11:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:50.627 20:11:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.627 20:11:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:19:50.627 20:11:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:50.627 20:11:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.627 20:11:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:50.627 [2024-05-15 20:11:42.958857] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:50.627 [2024-05-15 20:11:42.958874] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:50.627 [2024-05-15 20:11:42.958881] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:50.627 request: 00:19:50.627 { 00:19:50.627 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:50.627 "namespace": { 00:19:50.627 "bdev_name": "Malloc0", 00:19:50.627 "no_auto_visible": false 00:19:50.627 }, 00:19:50.627 "method": "nvmf_subsystem_add_ns", 00:19:50.627 "req_id": 1 00:19:50.627 } 00:19:50.627 Got JSON-RPC error response 00:19:50.627 response: 00:19:50.627 { 00:19:50.627 "code": -32602, 00:19:50.627 "message": "Invalid parameters" 00:19:50.627 } 00:19:50.627 20:11:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:50.627 20:11:42 
nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:19:50.627 20:11:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:50.627 20:11:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:19:50.627 Adding namespace failed - expected result. 00:19:50.628 20:11:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:19:50.628 test case2: host connect to nvmf target in multiple paths 00:19:50.628 20:11:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:50.628 20:11:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.628 20:11:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:50.628 [2024-05-15 20:11:42.970978] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:50.628 20:11:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.628 20:11:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:52.012 20:11:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:19:53.926 20:11:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:53.926 20:11:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:19:53.926 20:11:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:19:53.926 20:11:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:19:53.926 20:11:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:19:55.849 20:11:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:19:55.849 20:11:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:19:55.849 20:11:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:19:55.849 20:11:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:19:55.849 20:11:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:19:55.849 20:11:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:19:55.849 20:11:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:55.849 [global] 00:19:55.849 thread=1 00:19:55.849 invalidate=1 00:19:55.849 rw=write 00:19:55.849 time_based=1 00:19:55.849 runtime=1 00:19:55.849 ioengine=libaio 00:19:55.849 direct=1 00:19:55.849 bs=4096 00:19:55.849 iodepth=1 00:19:55.849 norandommap=0 00:19:55.849 numjobs=1 00:19:55.849 00:19:55.849 verify_dump=1 00:19:55.849 verify_backlog=512 00:19:55.849 verify_state_save=0 00:19:55.849 do_verify=1 00:19:55.849 verify=crc32c-intel 00:19:55.849 [job0] 00:19:55.849 filename=/dev/nvme0n1 00:19:55.849 Could not set queue depth (nvme0n1) 00:19:56.110 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:19:56.110 fio-3.35 00:19:56.110 Starting 1 thread 00:19:57.497 00:19:57.497 job0: (groupid=0, jobs=1): err= 0: pid=35524: Wed May 15 20:11:49 2024 00:19:57.497 read: IOPS=487, BW=1950KiB/s (1997kB/s)(1952KiB/1001msec) 00:19:57.497 slat (nsec): min=25264, max=58031, avg=26645.34, stdev=3772.46 00:19:57.497 clat (usec): min=1024, max=1390, avg=1228.95, stdev=57.27 00:19:57.497 lat (usec): min=1050, max=1416, avg=1255.59, stdev=57.56 00:19:57.497 clat percentiles (usec): 00:19:57.497 | 1.00th=[ 1057], 5.00th=[ 1139], 10.00th=[ 1156], 20.00th=[ 1188], 00:19:57.497 | 30.00th=[ 1205], 40.00th=[ 1221], 50.00th=[ 1237], 60.00th=[ 1254], 00:19:57.497 | 70.00th=[ 1254], 80.00th=[ 1270], 90.00th=[ 1287], 95.00th=[ 1303], 00:19:57.497 | 99.00th=[ 1385], 99.50th=[ 1385], 99.90th=[ 1385], 99.95th=[ 1385], 00:19:57.497 | 99.99th=[ 1385] 00:19:57.497 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:19:57.498 slat (nsec): min=9156, max=66566, avg=30100.72, stdev=9451.26 00:19:57.498 clat (usec): min=339, max=1132, avg=710.42, stdev=99.09 00:19:57.498 lat (usec): min=351, max=1149, avg=740.52, stdev=103.12 00:19:57.498 clat percentiles (usec): 00:19:57.498 | 1.00th=[ 445], 5.00th=[ 529], 10.00th=[ 586], 20.00th=[ 644], 00:19:57.498 | 30.00th=[ 668], 40.00th=[ 693], 50.00th=[ 709], 60.00th=[ 742], 00:19:57.498 | 70.00th=[ 766], 80.00th=[ 799], 90.00th=[ 832], 95.00th=[ 857], 00:19:57.498 | 99.00th=[ 889], 99.50th=[ 922], 99.90th=[ 1139], 99.95th=[ 1139], 00:19:57.498 | 99.99th=[ 1139] 00:19:57.498 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:19:57.498 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:57.498 lat (usec) : 500=1.60%, 750=31.90%, 1000=17.60% 00:19:57.498 lat (msec) : 2=48.90% 00:19:57.498 cpu : usr=1.70%, sys=4.30%, ctx=1000, majf=0, minf=1 00:19:57.498 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:57.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.498 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.498 issued rwts: total=488,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.498 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:57.498 00:19:57.498 Run status group 0 (all jobs): 00:19:57.498 READ: bw=1950KiB/s (1997kB/s), 1950KiB/s-1950KiB/s (1997kB/s-1997kB/s), io=1952KiB (1999kB), run=1001-1001msec 00:19:57.498 WRITE: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:19:57.498 00:19:57.498 Disk stats (read/write): 00:19:57.498 nvme0n1: ios=458/512, merge=0/0, ticks=595/293, in_queue=888, util=98.10% 00:19:57.498 20:11:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:57.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:57.498 20:11:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:57.498 20:11:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:19:57.498 20:11:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:19:57.498 20:11:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:57.498 20:11:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:57.498 20:11:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:19:57.498 
20:11:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:19:57.498 20:11:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:57.498 20:11:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:19:57.498 20:11:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:57.498 20:11:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:19:57.498 20:11:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:57.498 20:11:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:19:57.498 20:11:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:57.498 20:11:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:57.498 rmmod nvme_tcp 00:19:57.498 rmmod nvme_fabrics 00:19:57.498 rmmod nvme_keyring 00:19:57.498 20:11:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:57.498 20:11:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:19:57.498 20:11:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:19:57.498 20:11:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 34032 ']' 00:19:57.498 20:11:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 34032 00:19:57.498 20:11:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 34032 ']' 00:19:57.498 20:11:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 34032 00:19:57.498 20:11:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:19:57.498 20:11:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:57.498 20:11:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 34032 00:19:57.498 20:11:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:57.498 20:11:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:57.498 20:11:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 34032' 00:19:57.498 killing process with pid 34032 00:19:57.498 20:11:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 34032 00:19:57.498 [2024-05-15 20:11:49.863615] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:57.498 20:11:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 34032 00:19:57.760 20:11:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:57.760 20:11:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:57.760 20:11:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:57.760 20:11:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:57.760 20:11:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:57.760 20:11:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.760 20:11:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:57.760 20:11:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.676 20:11:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:59.676 00:19:59.676 real 0m18.837s 00:19:59.676 user 0m45.733s 00:19:59.676 sys 0m7.272s 00:19:59.676 20:11:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:19:59.676 20:11:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:59.676 ************************************ 00:19:59.676 END TEST nvmf_nmic 00:19:59.676 ************************************ 00:19:59.676 20:11:52 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:59.676 20:11:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:59.676 20:11:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:59.676 20:11:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:59.676 ************************************ 00:19:59.676 START TEST nvmf_fio_target 00:19:59.676 ************************************ 00:19:59.676 20:11:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:59.938 * Looking for test storage... 00:19:59.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:59.938 20:11:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:59.938 20:11:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:19:59.938 20:11:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:59.938 20:11:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:59.938 20:11:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:59.938 20:11:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:59.938 20:11:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:59.938 20:11:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:59.938 20:11:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:59.938 20:11:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:59.938 20:11:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:59.938 20:11:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:59.938 20:11:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:59.938 20:11:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:59.938 20:11:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:59.938 20:11:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:59.938 20:11:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:59.938 20:11:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:59.938 20:11:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:59.938 20:11:52 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:59.938 20:11:52 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:59.938 20:11:52 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:59.938 20:11:52 nvmf_tcp.nvmf_fio_target 
-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.938 20:11:52 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.938 20:11:52 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.938 20:11:52 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:19:59.938 20:11:52 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.938 20:11:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:19:59.938 20:11:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:59.938 20:11:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:59.938 20:11:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:59.938 20:11:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:59.938 20:11:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:59.938 20:11:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:59.938 20:11:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:59.938 20:11:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:59.938 20:11:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:59.939 20:11:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:59.939 20:11:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:59.939 20:11:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:19:59.939 20:11:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:59.939 20:11:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:59.939 20:11:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:59.939 20:11:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:59.939 20:11:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:59.939 20:11:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.939 20:11:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.939 20:11:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.939 20:11:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:59.939 20:11:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:59.939 20:11:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:59.939 20:11:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:08.097 20:12:00 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:08.097 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:08.097 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.097 20:12:00 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:08.097 Found net devices under 0000:31:00.0: cvl_0_0 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:08.097 Found net devices under 0000:31:00.1: cvl_0_1 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:08.097 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:20:08.358 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:08.358 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:08.358 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:08.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:08.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:20:08.358 00:20:08.358 --- 10.0.0.2 ping statistics --- 00:20:08.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.358 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:20:08.359 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:08.359 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:08.359 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.354 ms 00:20:08.359 00:20:08.359 --- 10.0.0.1 ping statistics --- 00:20:08.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.359 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:20:08.359 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:08.359 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:20:08.359 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:08.359 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:08.359 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:08.359 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:08.359 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:08.359 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:08.359 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:08.359 20:12:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:20:08.359 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:08.359 20:12:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:08.359 20:12:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.359 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=40585 00:20:08.359 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 40585 00:20:08.359 20:12:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:08.359 20:12:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 40585 ']' 00:20:08.359 20:12:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.359 20:12:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:08.359 20:12:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:08.359 20:12:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:08.359 20:12:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.359 [2024-05-15 20:12:00.795311] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:20:08.359 [2024-05-15 20:12:00.795388] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.359 EAL: No free 2048 kB hugepages reported on node 1 00:20:08.619 [2024-05-15 20:12:00.892873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:08.619 [2024-05-15 20:12:00.986432] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.619 [2024-05-15 20:12:00.986487] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:08.619 [2024-05-15 20:12:00.986496] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.619 [2024-05-15 20:12:00.986503] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:08.619 [2024-05-15 20:12:00.986509] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:08.619 [2024-05-15 20:12:00.986562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.619 [2024-05-15 20:12:00.986700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:08.619 [2024-05-15 20:12:00.986863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.619 [2024-05-15 20:12:00.986863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:09.188 20:12:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:09.188 20:12:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:20:09.188 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:09.188 20:12:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:09.188 20:12:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.449 20:12:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.449 20:12:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:09.449 [2024-05-15 20:12:01.912784] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.449 20:12:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:09.709 20:12:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:20:09.709 20:12:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:09.970 20:12:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:20:09.970 20:12:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:10.230 20:12:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:20:10.230 20:12:02 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:10.491 20:12:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:20:10.491 20:12:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:20:10.751 20:12:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:11.012 20:12:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:20:11.012 20:12:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:11.273 20:12:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:20:11.273 20:12:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:11.273 20:12:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:20:11.273 20:12:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:20:11.534 20:12:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:11.795 20:12:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:11.795 20:12:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:12.055 20:12:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:12.055 20:12:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:12.316 20:12:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:12.316 [2024-05-15 20:12:04.816195] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:12.316 [2024-05-15 20:12:04.816473] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:12.577 20:12:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:20:12.577 20:12:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:20:12.848 20:12:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:14.763 20:12:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 
-- # waitforserial SPDKISFASTANDAWESOME 4 00:20:14.763 20:12:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:20:14.763 20:12:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:20:14.763 20:12:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:20:14.763 20:12:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:20:14.763 20:12:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:20:16.676 20:12:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:20:16.676 20:12:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:20:16.676 20:12:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:20:16.676 20:12:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:20:16.676 20:12:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:20:16.677 20:12:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:20:16.677 20:12:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:16.677 [global] 00:20:16.677 thread=1 00:20:16.677 invalidate=1 00:20:16.677 rw=write 00:20:16.677 time_based=1 00:20:16.677 runtime=1 00:20:16.677 ioengine=libaio 00:20:16.677 direct=1 00:20:16.677 bs=4096 00:20:16.677 iodepth=1 00:20:16.677 norandommap=0 00:20:16.677 numjobs=1 00:20:16.677 00:20:16.677 verify_dump=1 00:20:16.677 verify_backlog=512 00:20:16.677 verify_state_save=0 00:20:16.677 do_verify=1 00:20:16.677 verify=crc32c-intel 00:20:16.677 [job0] 00:20:16.677 filename=/dev/nvme0n1 00:20:16.677 [job1] 00:20:16.677 filename=/dev/nvme0n2 00:20:16.677 [job2] 00:20:16.677 filename=/dev/nvme0n3 00:20:16.677 [job3] 00:20:16.677 filename=/dev/nvme0n4 00:20:16.677 Could not set queue depth (nvme0n1) 00:20:16.677 Could not set queue depth (nvme0n2) 00:20:16.677 Could not set queue depth (nvme0n3) 00:20:16.677 Could not set queue depth (nvme0n4) 00:20:16.937 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:16.937 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:16.937 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:16.937 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:16.937 fio-3.35 00:20:16.937 Starting 4 threads 00:20:18.322 00:20:18.322 job0: (groupid=0, jobs=1): err= 0: pid=42980: Wed May 15 20:12:10 2024 00:20:18.322 read: IOPS=17, BW=70.2KiB/s (71.9kB/s)(72.0KiB/1026msec) 00:20:18.322 slat (nsec): min=10146, max=28079, avg=24487.89, stdev=3700.63 00:20:18.322 clat (usec): min=1105, max=42195, avg=35189.35, stdev=15571.45 00:20:18.322 lat (usec): min=1130, max=42205, avg=35213.83, stdev=15570.40 00:20:18.322 clat percentiles (usec): 00:20:18.322 | 1.00th=[ 1106], 5.00th=[ 1106], 10.00th=[ 1139], 20.00th=[41681], 00:20:18.322 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:20:18.322 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:20:18.322 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 
00:20:18.322 | 99.99th=[42206] 00:20:18.322 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:20:18.322 slat (nsec): min=9753, max=67401, avg=30562.42, stdev=10378.31 00:20:18.322 clat (usec): min=371, max=1029, avg=720.54, stdev=139.44 00:20:18.322 lat (usec): min=382, max=1063, avg=751.10, stdev=143.25 00:20:18.322 clat percentiles (usec): 00:20:18.322 | 1.00th=[ 400], 5.00th=[ 478], 10.00th=[ 506], 20.00th=[ 603], 00:20:18.322 | 30.00th=[ 660], 40.00th=[ 701], 50.00th=[ 734], 60.00th=[ 775], 00:20:18.322 | 70.00th=[ 816], 80.00th=[ 840], 90.00th=[ 881], 95.00th=[ 922], 00:20:18.322 | 99.00th=[ 988], 99.50th=[ 1012], 99.90th=[ 1029], 99.95th=[ 1029], 00:20:18.322 | 99.99th=[ 1029] 00:20:18.322 bw ( KiB/s): min= 4096, max= 4096, per=51.30%, avg=4096.00, stdev= 0.00, samples=1 00:20:18.322 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:18.322 lat (usec) : 500=8.30%, 750=44.72%, 1000=42.83% 00:20:18.322 lat (msec) : 2=1.32%, 50=2.83% 00:20:18.322 cpu : usr=0.59%, sys=1.56%, ctx=534, majf=0, minf=1 00:20:18.322 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:18.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:18.322 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:18.322 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:18.322 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:18.322 job1: (groupid=0, jobs=1): err= 0: pid=42985: Wed May 15 20:12:10 2024 00:20:18.322 read: IOPS=482, BW=1928KiB/s (1974kB/s)(1932KiB/1002msec) 00:20:18.322 slat (nsec): min=7728, max=73838, avg=27271.86, stdev=4298.50 00:20:18.322 clat (usec): min=551, max=1732, avg=1195.61, stdev=101.63 00:20:18.322 lat (usec): min=568, max=1759, avg=1222.89, stdev=102.09 00:20:18.322 clat percentiles (usec): 00:20:18.322 | 1.00th=[ 775], 5.00th=[ 1037], 10.00th=[ 1106], 20.00th=[ 1139], 00:20:18.322 | 30.00th=[ 1172], 40.00th=[ 1188], 50.00th=[ 1205], 60.00th=[ 1221], 00:20:18.322 | 70.00th=[ 1237], 80.00th=[ 1254], 90.00th=[ 1287], 95.00th=[ 1303], 00:20:18.322 | 99.00th=[ 1369], 99.50th=[ 1500], 99.90th=[ 1729], 99.95th=[ 1729], 00:20:18.322 | 99.99th=[ 1729] 00:20:18.322 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:20:18.322 slat (nsec): min=10167, max=68403, avg=34682.95, stdev=10196.39 00:20:18.322 clat (usec): min=377, max=1259, avg=741.22, stdev=110.70 00:20:18.322 lat (usec): min=388, max=1307, avg=775.90, stdev=114.26 00:20:18.322 clat percentiles (usec): 00:20:18.322 | 1.00th=[ 474], 5.00th=[ 570], 10.00th=[ 594], 20.00th=[ 652], 00:20:18.322 | 30.00th=[ 676], 40.00th=[ 717], 50.00th=[ 750], 60.00th=[ 775], 00:20:18.322 | 70.00th=[ 807], 80.00th=[ 832], 90.00th=[ 889], 95.00th=[ 914], 00:20:18.322 | 99.00th=[ 955], 99.50th=[ 971], 99.90th=[ 1254], 99.95th=[ 1254], 00:20:18.322 | 99.99th=[ 1254] 00:20:18.322 bw ( KiB/s): min= 4096, max= 4096, per=51.30%, avg=4096.00, stdev= 0.00, samples=1 00:20:18.322 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:18.322 lat (usec) : 500=1.01%, 750=26.13%, 1000=25.63% 00:20:18.322 lat (msec) : 2=47.24% 00:20:18.322 cpu : usr=2.80%, sys=3.50%, ctx=999, majf=0, minf=1 00:20:18.322 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:18.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:18.322 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:18.322 issued rwts: 
total=483,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:18.322 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:18.322 job2: (groupid=0, jobs=1): err= 0: pid=42986: Wed May 15 20:12:10 2024 00:20:18.322 read: IOPS=15, BW=62.7KiB/s (64.2kB/s)(64.0KiB/1021msec) 00:20:18.322 slat (nsec): min=7550, max=37077, avg=25045.06, stdev=6900.86 00:20:18.322 clat (usec): min=1188, max=43156, avg=39461.67, stdev=10211.24 00:20:18.322 lat (usec): min=1198, max=43193, avg=39486.71, stdev=10215.39 00:20:18.322 clat percentiles (usec): 00:20:18.322 | 1.00th=[ 1188], 5.00th=[ 1188], 10.00th=[41681], 20.00th=[41681], 00:20:18.322 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:20:18.322 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[43254], 00:20:18.322 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:20:18.322 | 99.99th=[43254] 00:20:18.322 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:20:18.322 slat (nsec): min=9883, max=63586, avg=31110.07, stdev=9877.62 00:20:18.322 clat (usec): min=412, max=1051, avg=713.46, stdev=115.92 00:20:18.322 lat (usec): min=432, max=1085, avg=744.57, stdev=119.62 00:20:18.322 clat percentiles (usec): 00:20:18.322 | 1.00th=[ 465], 5.00th=[ 510], 10.00th=[ 570], 20.00th=[ 611], 00:20:18.322 | 30.00th=[ 652], 40.00th=[ 693], 50.00th=[ 717], 60.00th=[ 742], 00:20:18.322 | 70.00th=[ 775], 80.00th=[ 816], 90.00th=[ 873], 95.00th=[ 906], 00:20:18.322 | 99.00th=[ 971], 99.50th=[ 988], 99.90th=[ 1057], 99.95th=[ 1057], 00:20:18.322 | 99.99th=[ 1057] 00:20:18.322 bw ( KiB/s): min= 4096, max= 4096, per=51.30%, avg=4096.00, stdev= 0.00, samples=1 00:20:18.322 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:18.322 lat (usec) : 500=3.60%, 750=57.01%, 1000=35.98% 00:20:18.322 lat (msec) : 2=0.57%, 50=2.84% 00:20:18.322 cpu : usr=0.98%, sys=2.06%, ctx=529, majf=0, minf=1 00:20:18.322 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:18.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:18.322 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:18.322 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:18.322 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:18.323 job3: (groupid=0, jobs=1): err= 0: pid=42987: Wed May 15 20:12:10 2024 00:20:18.323 read: IOPS=502, BW=2010KiB/s (2058kB/s)(2012KiB/1001msec) 00:20:18.323 slat (nsec): min=7910, max=59489, avg=26934.76, stdev=3078.62 00:20:18.323 clat (usec): min=609, max=1406, avg=1125.80, stdev=92.54 00:20:18.323 lat (usec): min=622, max=1434, avg=1152.73, stdev=93.22 00:20:18.323 clat percentiles (usec): 00:20:18.323 | 1.00th=[ 848], 5.00th=[ 947], 10.00th=[ 1004], 20.00th=[ 1074], 00:20:18.323 | 30.00th=[ 1106], 40.00th=[ 1123], 50.00th=[ 1139], 60.00th=[ 1156], 00:20:18.323 | 70.00th=[ 1172], 80.00th=[ 1188], 90.00th=[ 1221], 95.00th=[ 1237], 00:20:18.323 | 99.00th=[ 1303], 99.50th=[ 1319], 99.90th=[ 1401], 99.95th=[ 1401], 00:20:18.323 | 99.99th=[ 1401] 00:20:18.323 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:20:18.323 slat (nsec): min=9933, max=57628, avg=32788.02, stdev=8757.83 00:20:18.323 clat (usec): min=403, max=1147, avg=764.25, stdev=107.06 00:20:18.323 lat (usec): min=413, max=1182, avg=797.03, stdev=109.93 00:20:18.323 clat percentiles (usec): 00:20:18.323 | 1.00th=[ 482], 5.00th=[ 586], 10.00th=[ 635], 20.00th=[ 685], 00:20:18.323 | 30.00th=[ 709], 40.00th=[ 
734], 50.00th=[ 766], 60.00th=[ 799], 00:20:18.323 | 70.00th=[ 824], 80.00th=[ 857], 90.00th=[ 889], 95.00th=[ 930], 00:20:18.323 | 99.00th=[ 996], 99.50th=[ 1020], 99.90th=[ 1156], 99.95th=[ 1156], 00:20:18.323 | 99.99th=[ 1156] 00:20:18.323 bw ( KiB/s): min= 4096, max= 4096, per=51.30%, avg=4096.00, stdev= 0.00, samples=1 00:20:18.323 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:18.323 lat (usec) : 500=0.89%, 750=21.67%, 1000=32.32% 00:20:18.323 lat (msec) : 2=45.12% 00:20:18.323 cpu : usr=1.70%, sys=4.60%, ctx=1016, majf=0, minf=1 00:20:18.323 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:18.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:18.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:18.323 issued rwts: total=503,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:18.323 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:18.323 00:20:18.323 Run status group 0 (all jobs): 00:20:18.323 READ: bw=3977KiB/s (4072kB/s), 62.7KiB/s-2010KiB/s (64.2kB/s-2058kB/s), io=4080KiB (4178kB), run=1001-1026msec 00:20:18.323 WRITE: bw=7984KiB/s (8176kB/s), 1996KiB/s-2046KiB/s (2044kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1026msec 00:20:18.323 00:20:18.323 Disk stats (read/write): 00:20:18.323 nvme0n1: ios=34/512, merge=0/0, ticks=1265/341, in_queue=1606, util=83.77% 00:20:18.323 nvme0n2: ios=403/512, merge=0/0, ticks=672/347, in_queue=1019, util=88.16% 00:20:18.323 nvme0n3: ios=68/512, merge=0/0, ticks=1377/306, in_queue=1683, util=91.75% 00:20:18.323 nvme0n4: ios=426/512, merge=0/0, ticks=511/319, in_queue=830, util=97.33% 00:20:18.323 20:12:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:20:18.323 [global] 00:20:18.323 thread=1 00:20:18.323 invalidate=1 00:20:18.323 rw=randwrite 00:20:18.323 time_based=1 00:20:18.323 runtime=1 00:20:18.323 ioengine=libaio 00:20:18.323 direct=1 00:20:18.323 bs=4096 00:20:18.323 iodepth=1 00:20:18.323 norandommap=0 00:20:18.323 numjobs=1 00:20:18.323 00:20:18.323 verify_dump=1 00:20:18.323 verify_backlog=512 00:20:18.323 verify_state_save=0 00:20:18.323 do_verify=1 00:20:18.323 verify=crc32c-intel 00:20:18.323 [job0] 00:20:18.323 filename=/dev/nvme0n1 00:20:18.323 [job1] 00:20:18.323 filename=/dev/nvme0n2 00:20:18.323 [job2] 00:20:18.323 filename=/dev/nvme0n3 00:20:18.323 [job3] 00:20:18.323 filename=/dev/nvme0n4 00:20:18.323 Could not set queue depth (nvme0n1) 00:20:18.323 Could not set queue depth (nvme0n2) 00:20:18.323 Could not set queue depth (nvme0n3) 00:20:18.323 Could not set queue depth (nvme0n4) 00:20:18.584 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:18.584 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:18.584 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:18.584 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:18.584 fio-3.35 00:20:18.584 Starting 4 threads 00:20:19.994 00:20:19.994 job0: (groupid=0, jobs=1): err= 0: pid=43552: Wed May 15 20:12:12 2024 00:20:19.994 read: IOPS=14, BW=58.5KiB/s (59.9kB/s)(60.0KiB/1026msec) 00:20:19.994 slat (nsec): min=23828, max=24700, avg=24085.67, stdev=256.88 00:20:19.994 clat (usec): min=41831, 
max=42246, avg=41984.31, stdev=116.15 00:20:19.994 lat (usec): min=41855, max=42270, avg=42008.39, stdev=116.07 00:20:19.994 clat percentiles (usec): 00:20:19.994 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:20:19.994 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:20:19.994 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:20:19.994 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:19.994 | 99.99th=[42206] 00:20:19.994 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:20:19.994 slat (nsec): min=9026, max=67421, avg=27246.23, stdev=8416.45 00:20:19.994 clat (usec): min=327, max=967, avg=737.76, stdev=104.49 00:20:19.994 lat (usec): min=336, max=996, avg=765.00, stdev=107.81 00:20:19.994 clat percentiles (usec): 00:20:19.994 | 1.00th=[ 457], 5.00th=[ 553], 10.00th=[ 594], 20.00th=[ 660], 00:20:19.994 | 30.00th=[ 693], 40.00th=[ 717], 50.00th=[ 742], 60.00th=[ 775], 00:20:19.994 | 70.00th=[ 799], 80.00th=[ 824], 90.00th=[ 857], 95.00th=[ 889], 00:20:19.994 | 99.00th=[ 955], 99.50th=[ 955], 99.90th=[ 971], 99.95th=[ 971], 00:20:19.994 | 99.99th=[ 971] 00:20:19.994 bw ( KiB/s): min= 4096, max= 4096, per=41.56%, avg=4096.00, stdev= 0.00, samples=1 00:20:19.994 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:19.994 lat (usec) : 500=2.85%, 750=47.25%, 1000=47.06% 00:20:19.994 lat (msec) : 50=2.85% 00:20:19.994 cpu : usr=0.68%, sys=1.37%, ctx=528, majf=0, minf=1 00:20:19.994 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:19.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.994 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.994 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:19.994 job1: (groupid=0, jobs=1): err= 0: pid=43553: Wed May 15 20:12:12 2024 00:20:19.994 read: IOPS=14, BW=57.7KiB/s (59.1kB/s)(60.0KiB/1039msec) 00:20:19.994 slat (nsec): min=24265, max=25747, avg=24809.60, stdev=435.93 00:20:19.994 clat (usec): min=41205, max=42038, avg=41910.65, stdev=202.81 00:20:19.994 lat (usec): min=41229, max=42064, avg=41935.46, stdev=202.90 00:20:19.994 clat percentiles (usec): 00:20:19.994 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:20:19.994 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:20:19.994 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:20:19.994 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:19.994 | 99.99th=[42206] 00:20:19.994 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:20:19.994 slat (nsec): min=9838, max=51896, avg=29631.74, stdev=8330.70 00:20:19.994 clat (usec): min=506, max=1221, avg=761.71, stdev=89.40 00:20:19.994 lat (usec): min=527, max=1257, avg=791.34, stdev=91.73 00:20:19.994 clat percentiles (usec): 00:20:19.994 | 1.00th=[ 553], 5.00th=[ 603], 10.00th=[ 652], 20.00th=[ 693], 00:20:19.994 | 30.00th=[ 717], 40.00th=[ 742], 50.00th=[ 758], 60.00th=[ 791], 00:20:19.994 | 70.00th=[ 807], 80.00th=[ 832], 90.00th=[ 865], 95.00th=[ 898], 00:20:19.994 | 99.00th=[ 979], 99.50th=[ 1045], 99.90th=[ 1221], 99.95th=[ 1221], 00:20:19.994 | 99.99th=[ 1221] 00:20:19.994 bw ( KiB/s): min= 4096, max= 4096, per=41.56%, avg=4096.00, stdev= 0.00, samples=1 00:20:19.994 iops : min= 1024, max= 1024, avg=1024.00, stdev= 
0.00, samples=1 00:20:19.994 lat (usec) : 750=42.88%, 1000=53.32% 00:20:19.994 lat (msec) : 2=0.95%, 50=2.85% 00:20:19.994 cpu : usr=0.87%, sys=1.35%, ctx=528, majf=0, minf=1 00:20:19.994 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:19.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.994 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.994 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.994 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:19.994 job2: (groupid=0, jobs=1): err= 0: pid=43554: Wed May 15 20:12:12 2024 00:20:19.994 read: IOPS=496, BW=1984KiB/s (2032kB/s)(1988KiB/1002msec) 00:20:19.994 slat (nsec): min=7563, max=60659, avg=26781.25, stdev=3810.08 00:20:19.994 clat (usec): min=832, max=1408, avg=1183.40, stdev=117.87 00:20:19.994 lat (usec): min=859, max=1434, avg=1210.18, stdev=117.72 00:20:19.994 clat percentiles (usec): 00:20:19.995 | 1.00th=[ 873], 5.00th=[ 938], 10.00th=[ 996], 20.00th=[ 1074], 00:20:19.995 | 30.00th=[ 1139], 40.00th=[ 1188], 50.00th=[ 1205], 60.00th=[ 1237], 00:20:19.995 | 70.00th=[ 1254], 80.00th=[ 1287], 90.00th=[ 1319], 95.00th=[ 1336], 00:20:19.995 | 99.00th=[ 1369], 99.50th=[ 1385], 99.90th=[ 1401], 99.95th=[ 1401], 00:20:19.995 | 99.99th=[ 1401] 00:20:19.995 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:20:19.995 slat (nsec): min=8705, max=51791, avg=29571.40, stdev=8695.92 00:20:19.995 clat (usec): min=223, max=1190, avg=734.35, stdev=156.59 00:20:19.995 lat (usec): min=233, max=1222, avg=763.92, stdev=160.75 00:20:19.995 clat percentiles (usec): 00:20:19.995 | 1.00th=[ 293], 5.00th=[ 400], 10.00th=[ 515], 20.00th=[ 619], 00:20:19.995 | 30.00th=[ 676], 40.00th=[ 725], 50.00th=[ 766], 60.00th=[ 799], 00:20:19.995 | 70.00th=[ 840], 80.00th=[ 865], 90.00th=[ 898], 95.00th=[ 938], 00:20:19.995 | 99.00th=[ 979], 99.50th=[ 996], 99.90th=[ 1188], 99.95th=[ 1188], 00:20:19.995 | 99.99th=[ 1188] 00:20:19.995 bw ( KiB/s): min= 4096, max= 4096, per=41.56%, avg=4096.00, stdev= 0.00, samples=1 00:20:19.995 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:19.995 lat (usec) : 250=0.40%, 500=4.36%, 750=18.04%, 1000=32.90% 00:20:19.995 lat (msec) : 2=44.30% 00:20:19.995 cpu : usr=2.30%, sys=3.80%, ctx=1009, majf=0, minf=1 00:20:19.995 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:19.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.995 issued rwts: total=497,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.995 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:19.995 job3: (groupid=0, jobs=1): err= 0: pid=43555: Wed May 15 20:12:12 2024 00:20:19.995 read: IOPS=655, BW=2621KiB/s (2684kB/s)(2624KiB/1001msec) 00:20:19.995 slat (nsec): min=6595, max=62494, avg=28762.08, stdev=8432.41 00:20:19.995 clat (usec): min=393, max=993, avg=787.62, stdev=75.73 00:20:19.995 lat (usec): min=423, max=1025, avg=816.38, stdev=76.81 00:20:19.995 clat percentiles (usec): 00:20:19.995 | 1.00th=[ 494], 5.00th=[ 685], 10.00th=[ 717], 20.00th=[ 742], 00:20:19.995 | 30.00th=[ 766], 40.00th=[ 775], 50.00th=[ 791], 60.00th=[ 807], 00:20:19.995 | 70.00th=[ 824], 80.00th=[ 840], 90.00th=[ 865], 95.00th=[ 906], 00:20:19.995 | 99.00th=[ 947], 99.50th=[ 963], 99.90th=[ 996], 99.95th=[ 996], 00:20:19.995 | 99.99th=[ 996] 00:20:19.995 write: 
IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:20:19.995 slat (nsec): min=8756, max=86397, avg=34072.14, stdev=11034.83 00:20:19.995 clat (usec): min=197, max=686, avg=405.30, stdev=54.55 00:20:19.995 lat (usec): min=223, max=726, avg=439.37, stdev=56.55 00:20:19.995 clat percentiles (usec): 00:20:19.995 | 1.00th=[ 297], 5.00th=[ 326], 10.00th=[ 338], 20.00th=[ 359], 00:20:19.995 | 30.00th=[ 371], 40.00th=[ 388], 50.00th=[ 404], 60.00th=[ 420], 00:20:19.995 | 70.00th=[ 437], 80.00th=[ 453], 90.00th=[ 469], 95.00th=[ 490], 00:20:19.995 | 99.00th=[ 537], 99.50th=[ 570], 99.90th=[ 603], 99.95th=[ 685], 00:20:19.995 | 99.99th=[ 685] 00:20:19.995 bw ( KiB/s): min= 4096, max= 4096, per=41.56%, avg=4096.00, stdev= 0.00, samples=1 00:20:19.995 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:19.995 lat (usec) : 250=0.24%, 500=58.87%, 750=10.83%, 1000=30.06% 00:20:19.995 cpu : usr=3.90%, sys=7.20%, ctx=1681, majf=0, minf=1 00:20:19.995 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:19.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.995 issued rwts: total=656,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.995 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:19.995 00:20:19.995 Run status group 0 (all jobs): 00:20:19.995 READ: bw=4554KiB/s (4664kB/s), 57.7KiB/s-2621KiB/s (59.1kB/s-2684kB/s), io=4732KiB (4846kB), run=1001-1039msec 00:20:19.995 WRITE: bw=9856KiB/s (10.1MB/s), 1971KiB/s-4092KiB/s (2018kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1039msec 00:20:19.995 00:20:19.995 Disk stats (read/write): 00:20:19.995 nvme0n1: ios=60/512, merge=0/0, ticks=486/358, in_queue=844, util=88.38% 00:20:19.995 nvme0n2: ios=41/512, merge=0/0, ticks=1297/363, in_queue=1660, util=97.96% 00:20:19.995 nvme0n3: ios=381/512, merge=0/0, ticks=877/304, in_queue=1181, util=91.04% 00:20:19.995 nvme0n4: ios=555/913, merge=0/0, ticks=560/308, in_queue=868, util=99.15% 00:20:19.995 20:12:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:20:19.995 [global] 00:20:19.995 thread=1 00:20:19.995 invalidate=1 00:20:19.995 rw=write 00:20:19.995 time_based=1 00:20:19.995 runtime=1 00:20:19.995 ioengine=libaio 00:20:19.995 direct=1 00:20:19.995 bs=4096 00:20:19.995 iodepth=128 00:20:19.995 norandommap=0 00:20:19.995 numjobs=1 00:20:19.995 00:20:19.995 verify_dump=1 00:20:19.995 verify_backlog=512 00:20:19.995 verify_state_save=0 00:20:19.995 do_verify=1 00:20:19.995 verify=crc32c-intel 00:20:19.995 [job0] 00:20:19.995 filename=/dev/nvme0n1 00:20:19.995 [job1] 00:20:19.995 filename=/dev/nvme0n2 00:20:19.995 [job2] 00:20:19.995 filename=/dev/nvme0n3 00:20:19.995 [job3] 00:20:19.995 filename=/dev/nvme0n4 00:20:19.995 Could not set queue depth (nvme0n1) 00:20:19.995 Could not set queue depth (nvme0n2) 00:20:19.995 Could not set queue depth (nvme0n3) 00:20:19.995 Could not set queue depth (nvme0n4) 00:20:20.263 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:20.263 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:20.264 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:20.264 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:20.264 fio-3.35 00:20:20.264 Starting 4 threads 00:20:21.661 00:20:21.661 job0: (groupid=0, jobs=1): err= 0: pid=44055: Wed May 15 20:12:13 2024 00:20:21.661 read: IOPS=6672, BW=26.1MiB/s (27.3MB/s)(26.2MiB/1004msec) 00:20:21.661 slat (nsec): min=1376, max=16447k, avg=73436.53, stdev=559925.80 00:20:21.661 clat (usec): min=1402, max=30035, avg=9678.88, stdev=4412.68 00:20:21.661 lat (usec): min=3044, max=30062, avg=9752.31, stdev=4441.74 00:20:21.661 clat percentiles (usec): 00:20:21.661 | 1.00th=[ 3752], 5.00th=[ 4948], 10.00th=[ 5473], 20.00th=[ 6194], 00:20:21.661 | 30.00th=[ 6718], 40.00th=[ 7373], 50.00th=[ 8356], 60.00th=[ 9503], 00:20:21.661 | 70.00th=[11338], 80.00th=[13173], 90.00th=[15926], 95.00th=[18744], 00:20:21.661 | 99.00th=[26608], 99.50th=[26608], 99.90th=[26608], 99.95th=[26608], 00:20:21.661 | 99.99th=[30016] 00:20:21.661 write: IOPS=7139, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1004msec); 0 zone resets 00:20:21.661 slat (usec): min=2, max=9768, avg=66.15, stdev=468.64 00:20:21.661 clat (usec): min=1372, max=49604, avg=8719.20, stdev=6608.03 00:20:21.661 lat (usec): min=1914, max=49612, avg=8785.35, stdev=6650.86 00:20:21.661 clat percentiles (usec): 00:20:21.661 | 1.00th=[ 2474], 5.00th=[ 3425], 10.00th=[ 4228], 20.00th=[ 5342], 00:20:21.661 | 30.00th=[ 5735], 40.00th=[ 6194], 50.00th=[ 6718], 60.00th=[ 7504], 00:20:21.661 | 70.00th=[ 9110], 80.00th=[10683], 90.00th=[14091], 95.00th=[18482], 00:20:21.661 | 99.00th=[42206], 99.50th=[44303], 99.90th=[48497], 99.95th=[49546], 00:20:21.661 | 99.99th=[49546] 00:20:21.661 bw ( KiB/s): min=28264, max=28400, per=33.88%, avg=28332.00, stdev=96.17, samples=2 00:20:21.661 iops : min= 7066, max= 7100, avg=7083.00, stdev=24.04, samples=2 00:20:21.661 lat (msec) : 2=0.11%, 4=4.95%, 10=65.49%, 20=25.93%, 50=3.53% 00:20:21.661 cpu : usr=6.08%, sys=5.68%, ctx=528, majf=0, minf=1 00:20:21.661 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:20:21.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:21.661 issued rwts: total=6699,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.661 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:21.661 job1: (groupid=0, jobs=1): err= 0: pid=44067: Wed May 15 20:12:13 2024 00:20:21.661 read: IOPS=4393, BW=17.2MiB/s (18.0MB/s)(17.2MiB/1005msec) 00:20:21.661 slat (nsec): min=1357, max=28766k, avg=127438.57, stdev=1051290.37 00:20:21.661 clat (usec): min=4107, max=48621, avg=16391.76, stdev=7681.39 00:20:21.661 lat (usec): min=5805, max=48623, avg=16519.19, stdev=7741.99 00:20:21.661 clat percentiles (usec): 00:20:21.661 | 1.00th=[ 7439], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[10421], 00:20:21.661 | 30.00th=[11338], 40.00th=[12387], 50.00th=[13829], 60.00th=[15270], 00:20:21.661 | 70.00th=[17957], 80.00th=[22414], 90.00th=[28181], 95.00th=[32113], 00:20:21.661 | 99.00th=[40109], 99.50th=[42730], 99.90th=[48497], 99.95th=[48497], 00:20:21.661 | 99.99th=[48497] 00:20:21.661 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:20:21.661 slat (usec): min=2, max=8905, avg=87.31, stdev=566.04 00:20:21.661 clat (usec): min=729, max=48623, avg=11948.89, stdev=7708.49 00:20:21.661 lat (usec): min=745, max=48632, avg=12036.20, stdev=7752.98 00:20:21.661 clat percentiles (usec): 00:20:21.661 | 1.00th=[ 3720], 5.00th=[ 5997], 10.00th=[ 6259], 20.00th=[ 7439], 00:20:21.661 | 
30.00th=[ 8356], 40.00th=[ 9241], 50.00th=[ 9765], 60.00th=[10421], 00:20:21.661 | 70.00th=[11076], 80.00th=[12649], 90.00th=[23200], 95.00th=[30802], 00:20:21.661 | 99.00th=[42730], 99.50th=[43779], 99.90th=[44827], 99.95th=[44827], 00:20:21.661 | 99.99th=[48497] 00:20:21.661 bw ( KiB/s): min=16392, max=20472, per=22.04%, avg=18432.00, stdev=2885.00, samples=2 00:20:21.661 iops : min= 4098, max= 5118, avg=4608.00, stdev=721.25, samples=2 00:20:21.661 lat (usec) : 750=0.02%, 1000=0.01% 00:20:21.661 lat (msec) : 2=0.08%, 4=0.53%, 10=34.12%, 20=47.97%, 50=17.27% 00:20:21.661 cpu : usr=4.28%, sys=4.78%, ctx=280, majf=0, minf=1 00:20:21.661 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:20:21.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:21.661 issued rwts: total=4415,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.661 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:21.661 job2: (groupid=0, jobs=1): err= 0: pid=44077: Wed May 15 20:12:13 2024 00:20:21.661 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:20:21.661 slat (nsec): min=1369, max=25920k, avg=146899.26, stdev=1132710.33 00:20:21.661 clat (usec): min=6906, max=75724, avg=19158.18, stdev=13797.50 00:20:21.661 lat (usec): min=6908, max=77791, avg=19305.08, stdev=13923.11 00:20:21.661 clat percentiles (usec): 00:20:21.661 | 1.00th=[ 7373], 5.00th=[ 7898], 10.00th=[ 9241], 20.00th=[10421], 00:20:21.661 | 30.00th=[11600], 40.00th=[12518], 50.00th=[13173], 60.00th=[13698], 00:20:21.661 | 70.00th=[16188], 80.00th=[34341], 90.00th=[38536], 95.00th=[47449], 00:20:21.661 | 99.00th=[69731], 99.50th=[69731], 99.90th=[76022], 99.95th=[76022], 00:20:21.661 | 99.99th=[76022] 00:20:21.661 write: IOPS=3661, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1007msec); 0 zone resets 00:20:21.661 slat (usec): min=2, max=22373, avg=123.05, stdev=900.79 00:20:21.661 clat (usec): min=3004, max=80295, avg=15936.13, stdev=13680.96 00:20:21.661 lat (usec): min=3011, max=80317, avg=16059.18, stdev=13787.36 00:20:21.661 clat percentiles (usec): 00:20:21.661 | 1.00th=[ 4883], 5.00th=[ 6652], 10.00th=[ 6980], 20.00th=[ 8717], 00:20:21.661 | 30.00th=[ 9765], 40.00th=[10683], 50.00th=[11338], 60.00th=[11863], 00:20:21.661 | 70.00th=[12387], 80.00th=[15533], 90.00th=[40109], 95.00th=[50594], 00:20:21.661 | 99.00th=[72877], 99.50th=[74974], 99.90th=[80217], 99.95th=[80217], 00:20:21.661 | 99.99th=[80217] 00:20:21.661 bw ( KiB/s): min=12288, max=16384, per=17.14%, avg=14336.00, stdev=2896.31, samples=2 00:20:21.661 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:20:21.661 lat (msec) : 4=0.32%, 10=23.22%, 20=55.05%, 50=16.41%, 100=5.01% 00:20:21.661 cpu : usr=2.39%, sys=4.37%, ctx=276, majf=0, minf=1 00:20:21.661 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:20:21.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:21.661 issued rwts: total=3584,3687,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.661 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:21.661 job3: (groupid=0, jobs=1): err= 0: pid=44078: Wed May 15 20:12:13 2024 00:20:21.661 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:20:21.661 slat (nsec): min=1406, max=12306k, avg=77594.64, stdev=647547.56 00:20:21.661 clat (usec): min=2943, max=31332, avg=12063.05, 
stdev=4548.82 00:20:21.661 lat (usec): min=2950, max=33440, avg=12140.64, stdev=4574.85 00:20:21.661 clat percentiles (usec): 00:20:21.661 | 1.00th=[ 4047], 5.00th=[ 6128], 10.00th=[ 6980], 20.00th=[ 9110], 00:20:21.661 | 30.00th=[10159], 40.00th=[10814], 50.00th=[11076], 60.00th=[12125], 00:20:21.661 | 70.00th=[13173], 80.00th=[13960], 90.00th=[17433], 95.00th=[19792], 00:20:21.661 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31327], 99.95th=[31327], 00:20:21.661 | 99.99th=[31327] 00:20:21.661 write: IOPS=5565, BW=21.7MiB/s (22.8MB/s)(21.8MiB/1004msec); 0 zone resets 00:20:21.661 slat (usec): min=2, max=23612, avg=85.17, stdev=757.18 00:20:21.661 clat (usec): min=997, max=57239, avg=11310.45, stdev=6959.49 00:20:21.661 lat (usec): min=1006, max=57247, avg=11395.62, stdev=7017.55 00:20:21.661 clat percentiles (usec): 00:20:21.662 | 1.00th=[ 2933], 5.00th=[ 4621], 10.00th=[ 5014], 20.00th=[ 6456], 00:20:21.662 | 30.00th=[ 7242], 40.00th=[ 8979], 50.00th=[10814], 60.00th=[11600], 00:20:21.662 | 70.00th=[12518], 80.00th=[13960], 90.00th=[17957], 95.00th=[20579], 00:20:21.662 | 99.00th=[47973], 99.50th=[55313], 99.90th=[56886], 99.95th=[57410], 00:20:21.662 | 99.99th=[57410] 00:20:21.662 bw ( KiB/s): min=18504, max=25184, per=26.12%, avg=21844.00, stdev=4723.47, samples=2 00:20:21.662 iops : min= 4626, max= 6296, avg=5461.00, stdev=1180.87, samples=2 00:20:21.662 lat (usec) : 1000=0.03% 00:20:21.662 lat (msec) : 2=0.08%, 4=1.88%, 10=34.97%, 20=57.58%, 50=5.16% 00:20:21.662 lat (msec) : 100=0.29% 00:20:21.662 cpu : usr=4.49%, sys=5.78%, ctx=382, majf=0, minf=1 00:20:21.662 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:20:21.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:21.662 issued rwts: total=5120,5588,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.662 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:21.662 00:20:21.662 Run status group 0 (all jobs): 00:20:21.662 READ: bw=76.9MiB/s (80.6MB/s), 13.9MiB/s-26.1MiB/s (14.6MB/s-27.3MB/s), io=77.4MiB (81.2MB), run=1004-1007msec 00:20:21.662 WRITE: bw=81.7MiB/s (85.6MB/s), 14.3MiB/s-27.9MiB/s (15.0MB/s-29.2MB/s), io=82.2MiB (86.2MB), run=1004-1007msec 00:20:21.662 00:20:21.662 Disk stats (read/write): 00:20:21.662 nvme0n1: ios=5677/5958, merge=0/0, ticks=52404/46647, in_queue=99051, util=87.17% 00:20:21.662 nvme0n2: ios=3629/3935, merge=0/0, ticks=57262/42016, in_queue=99278, util=91.34% 00:20:21.662 nvme0n3: ios=3124/3326, merge=0/0, ticks=27644/28102, in_queue=55746, util=95.36% 00:20:21.662 nvme0n4: ios=4145/4371, merge=0/0, ticks=44828/46695, in_queue=91523, util=97.23% 00:20:21.662 20:12:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:20:21.662 [global] 00:20:21.662 thread=1 00:20:21.662 invalidate=1 00:20:21.662 rw=randwrite 00:20:21.662 time_based=1 00:20:21.662 runtime=1 00:20:21.662 ioengine=libaio 00:20:21.662 direct=1 00:20:21.662 bs=4096 00:20:21.662 iodepth=128 00:20:21.662 norandommap=0 00:20:21.662 numjobs=1 00:20:21.662 00:20:21.662 verify_dump=1 00:20:21.662 verify_backlog=512 00:20:21.662 verify_state_save=0 00:20:21.662 do_verify=1 00:20:21.662 verify=crc32c-intel 00:20:21.662 [job0] 00:20:21.662 filename=/dev/nvme0n1 00:20:21.662 [job1] 00:20:21.662 filename=/dev/nvme0n2 00:20:21.662 [job2] 00:20:21.662 filename=/dev/nvme0n3 00:20:21.662 [job3] 
00:20:21.662 filename=/dev/nvme0n4 00:20:21.662 Could not set queue depth (nvme0n1) 00:20:21.662 Could not set queue depth (nvme0n2) 00:20:21.662 Could not set queue depth (nvme0n3) 00:20:21.662 Could not set queue depth (nvme0n4) 00:20:21.923 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:21.923 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:21.923 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:21.923 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:21.923 fio-3.35 00:20:21.923 Starting 4 threads 00:20:23.422 00:20:23.422 job0: (groupid=0, jobs=1): err= 0: pid=44493: Wed May 15 20:12:15 2024 00:20:23.422 read: IOPS=1516, BW=6065KiB/s (6211kB/s)(6144KiB/1013msec) 00:20:23.422 slat (usec): min=3, max=20249, avg=237.36, stdev=1463.41 00:20:23.422 clat (usec): min=4234, max=96068, avg=30372.27, stdev=21082.93 00:20:23.422 lat (usec): min=4239, max=96093, avg=30609.64, stdev=21273.36 00:20:23.422 clat percentiles (usec): 00:20:23.422 | 1.00th=[ 7308], 5.00th=[ 7767], 10.00th=[ 9503], 20.00th=[10814], 00:20:23.422 | 30.00th=[14222], 40.00th=[22414], 50.00th=[22414], 60.00th=[26346], 00:20:23.422 | 70.00th=[36963], 80.00th=[48497], 90.00th=[66323], 95.00th=[72877], 00:20:23.422 | 99.00th=[85459], 99.50th=[85459], 99.90th=[85459], 99.95th=[95945], 00:20:23.422 | 99.99th=[95945] 00:20:23.422 write: IOPS=1752, BW=7009KiB/s (7177kB/s)(7100KiB/1013msec); 0 zone resets 00:20:23.422 slat (usec): min=2, max=63352, avg=327.13, stdev=2132.84 00:20:23.422 clat (usec): min=1053, max=167222, avg=43350.51, stdev=42705.77 00:20:23.422 lat (usec): min=1062, max=178783, avg=43677.64, stdev=43006.30 00:20:23.422 clat percentiles (msec): 00:20:23.422 | 1.00th=[ 5], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 8], 00:20:23.422 | 30.00th=[ 10], 40.00th=[ 17], 50.00th=[ 22], 60.00th=[ 41], 00:20:23.422 | 70.00th=[ 62], 80.00th=[ 79], 90.00th=[ 124], 95.00th=[ 142], 00:20:23.422 | 99.00th=[ 150], 99.50th=[ 157], 99.90th=[ 167], 99.95th=[ 167], 00:20:23.422 | 99.99th=[ 167] 00:20:23.422 bw ( KiB/s): min= 4096, max= 9080, per=13.62%, avg=6588.00, stdev=3524.22, samples=2 00:20:23.422 iops : min= 1024, max= 2270, avg=1647.00, stdev=881.06, samples=2 00:20:23.422 lat (msec) : 2=0.30%, 4=0.03%, 10=22.62%, 20=17.67%, 50=31.53% 00:20:23.422 lat (msec) : 100=21.69%, 250=6.16% 00:20:23.422 cpu : usr=1.48%, sys=2.08%, ctx=173, majf=0, minf=2 00:20:23.422 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:20:23.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.422 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:23.422 issued rwts: total=1536,1775,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:23.422 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:23.422 job1: (groupid=0, jobs=1): err= 0: pid=44509: Wed May 15 20:12:15 2024 00:20:23.422 read: IOPS=3029, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1014msec) 00:20:23.422 slat (nsec): min=1375, max=48220k, avg=177936.06, stdev=1529118.69 00:20:23.422 clat (usec): min=3659, max=78898, avg=25848.37, stdev=20123.90 00:20:23.422 lat (usec): min=3667, max=78924, avg=26026.31, stdev=20278.69 00:20:23.422 clat percentiles (usec): 00:20:23.422 | 1.00th=[ 3687], 5.00th=[ 4359], 10.00th=[ 5145], 20.00th=[ 8848], 00:20:23.422 | 30.00th=[ 
9896], 40.00th=[13304], 50.00th=[19530], 60.00th=[23725], 00:20:23.422 | 70.00th=[33817], 80.00th=[47449], 90.00th=[61604], 95.00th=[63701], 00:20:23.422 | 99.00th=[69731], 99.50th=[73925], 99.90th=[76022], 99.95th=[78119], 00:20:23.422 | 99.99th=[79168] 00:20:23.422 write: IOPS=3899, BW=15.2MiB/s (16.0MB/s)(15.4MiB/1014msec); 0 zone resets 00:20:23.422 slat (usec): min=2, max=19240, avg=72.31, stdev=614.47 00:20:23.422 clat (usec): min=1048, max=77936, avg=12386.30, stdev=12370.19 00:20:23.422 lat (usec): min=1056, max=77944, avg=12458.61, stdev=12447.92 00:20:23.422 clat percentiles (usec): 00:20:23.422 | 1.00th=[ 2311], 5.00th=[ 3818], 10.00th=[ 4293], 20.00th=[ 4555], 00:20:23.422 | 30.00th=[ 5407], 40.00th=[ 6194], 50.00th=[ 7046], 60.00th=[ 7832], 00:20:23.422 | 70.00th=[ 9765], 80.00th=[20579], 90.00th=[30016], 95.00th=[37487], 00:20:23.422 | 99.00th=[61080], 99.50th=[71828], 99.90th=[72877], 99.95th=[78119], 00:20:23.423 | 99.99th=[78119] 00:20:23.423 bw ( KiB/s): min=10128, max=20480, per=31.63%, avg=15304.00, stdev=7319.97, samples=2 00:20:23.423 iops : min= 2532, max= 5120, avg=3826.00, stdev=1829.99, samples=2 00:20:23.423 lat (msec) : 2=0.20%, 4=4.74%, 10=48.49%, 20=14.15%, 50=23.36% 00:20:23.423 lat (msec) : 100=9.07% 00:20:23.423 cpu : usr=3.06%, sys=4.44%, ctx=276, majf=0, minf=1 00:20:23.423 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:20:23.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:23.423 issued rwts: total=3072,3954,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:23.423 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:23.423 job2: (groupid=0, jobs=1): err= 0: pid=44528: Wed May 15 20:12:15 2024 00:20:23.423 read: IOPS=2535, BW=9.90MiB/s (10.4MB/s)(10.1MiB/1022msec) 00:20:23.423 slat (nsec): min=1327, max=31897k, avg=159046.54, stdev=1363842.17 00:20:23.423 clat (msec): min=6, max=102, avg=22.48, stdev=18.56 00:20:23.423 lat (msec): min=6, max=106, avg=22.64, stdev=18.66 00:20:23.423 clat percentiles (msec): 00:20:23.423 | 1.00th=[ 7], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12], 00:20:23.423 | 30.00th=[ 14], 40.00th=[ 16], 50.00th=[ 17], 60.00th=[ 20], 00:20:23.423 | 70.00th=[ 22], 80.00th=[ 26], 90.00th=[ 33], 95.00th=[ 73], 00:20:23.423 | 99.00th=[ 99], 99.50th=[ 102], 99.90th=[ 103], 99.95th=[ 103], 00:20:23.423 | 99.99th=[ 103] 00:20:23.423 write: IOPS=3005, BW=11.7MiB/s (12.3MB/s)(12.0MiB/1022msec); 0 zone resets 00:20:23.423 slat (usec): min=2, max=13566, avg=166.35, stdev=916.93 00:20:23.423 clat (usec): min=1749, max=88185, avg=23240.29, stdev=21488.72 00:20:23.423 lat (usec): min=1760, max=88193, avg=23406.64, stdev=21633.79 00:20:23.423 clat percentiles (usec): 00:20:23.423 | 1.00th=[ 5342], 5.00th=[ 6063], 10.00th=[ 6915], 20.00th=[ 8586], 00:20:23.423 | 30.00th=[10814], 40.00th=[12387], 50.00th=[14877], 60.00th=[17695], 00:20:23.423 | 70.00th=[25297], 80.00th=[30802], 90.00th=[55313], 95.00th=[82314], 00:20:23.423 | 99.00th=[86508], 99.50th=[86508], 99.90th=[87557], 99.95th=[87557], 00:20:23.423 | 99.99th=[88605] 00:20:23.423 bw ( KiB/s): min= 6088, max=17712, per=24.59%, avg=11900.00, stdev=8219.41, samples=2 00:20:23.423 iops : min= 1522, max= 4428, avg=2975.00, stdev=2054.85, samples=2 00:20:23.423 lat (msec) : 2=0.18%, 10=17.46%, 20=48.47%, 50=24.58%, 100=8.99% 00:20:23.423 lat (msec) : 250=0.32% 00:20:23.423 cpu : usr=2.25%, sys=3.33%, ctx=219, majf=0, minf=1 00:20:23.423 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:20:23.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:23.423 issued rwts: total=2591,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:23.423 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:23.423 job3: (groupid=0, jobs=1): err= 0: pid=44535: Wed May 15 20:12:15 2024 00:20:23.423 read: IOPS=3002, BW=11.7MiB/s (12.3MB/s)(12.0MiB/1023msec) 00:20:23.423 slat (nsec): min=1425, max=26838k, avg=125430.84, stdev=1020058.53 00:20:23.423 clat (usec): min=7723, max=52766, avg=15831.01, stdev=7283.85 00:20:23.423 lat (usec): min=7730, max=52793, avg=15956.44, stdev=7366.14 00:20:23.423 clat percentiles (usec): 00:20:23.423 | 1.00th=[ 8717], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9896], 00:20:23.423 | 30.00th=[10552], 40.00th=[11338], 50.00th=[13698], 60.00th=[15270], 00:20:23.423 | 70.00th=[17433], 80.00th=[21365], 90.00th=[26084], 95.00th=[32900], 00:20:23.423 | 99.00th=[38011], 99.50th=[38011], 99.90th=[40633], 99.95th=[49546], 00:20:23.423 | 99.99th=[52691] 00:20:23.423 write: IOPS=3493, BW=13.6MiB/s (14.3MB/s)(14.0MiB/1023msec); 0 zone resets 00:20:23.423 slat (usec): min=2, max=37906, avg=167.28, stdev=1139.36 00:20:23.423 clat (usec): min=3670, max=96452, avg=22645.20, stdev=21702.23 00:20:23.423 lat (usec): min=3679, max=96460, avg=22812.48, stdev=21857.83 00:20:23.423 clat percentiles (usec): 00:20:23.423 | 1.00th=[ 5800], 5.00th=[ 6521], 10.00th=[ 6718], 20.00th=[ 7635], 00:20:23.423 | 30.00th=[ 8979], 40.00th=[11338], 50.00th=[13566], 60.00th=[17957], 00:20:23.423 | 70.00th=[22938], 80.00th=[32113], 90.00th=[50070], 95.00th=[83362], 00:20:23.423 | 99.00th=[92799], 99.50th=[94897], 99.90th=[95945], 99.95th=[95945], 00:20:23.423 | 99.99th=[95945] 00:20:23.423 bw ( KiB/s): min= 8192, max=19376, per=28.49%, avg=13784.00, stdev=7908.28, samples=2 00:20:23.423 iops : min= 2048, max= 4844, avg=3446.00, stdev=1977.07, samples=2 00:20:23.423 lat (msec) : 4=0.20%, 10=28.48%, 20=40.08%, 50=25.80%, 100=5.43% 00:20:23.423 cpu : usr=2.94%, sys=3.82%, ctx=237, majf=0, minf=1 00:20:23.423 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:20:23.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:23.423 issued rwts: total=3072,3574,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:23.423 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:23.423 00:20:23.423 Run status group 0 (all jobs): 00:20:23.423 READ: bw=39.2MiB/s (41.1MB/s), 6065KiB/s-11.8MiB/s (6211kB/s-12.4MB/s), io=40.1MiB (42.1MB), run=1013-1023msec 00:20:23.423 WRITE: bw=47.3MiB/s (49.5MB/s), 7009KiB/s-15.2MiB/s (7177kB/s-16.0MB/s), io=48.3MiB (50.7MB), run=1013-1023msec 00:20:23.423 00:20:23.423 Disk stats (read/write): 00:20:23.423 nvme0n1: ios=1104/1536, merge=0/0, ticks=13989/32855, in_queue=46844, util=87.98% 00:20:23.423 nvme0n2: ios=2898/3584, merge=0/0, ticks=40389/30435, in_queue=70824, util=95.72% 00:20:23.423 nvme0n3: ios=2302/2560, merge=0/0, ticks=42674/52986, in_queue=95660, util=88.40% 00:20:23.423 nvme0n4: ios=2614/2812, merge=0/0, ticks=41070/60210, in_queue=101280, util=95.52% 00:20:23.423 20:12:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:20:23.423 20:12:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=44636 00:20:23.423 20:12:15 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@61 -- # sleep 3 00:20:23.423 20:12:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:20:23.423 [global] 00:20:23.423 thread=1 00:20:23.423 invalidate=1 00:20:23.423 rw=read 00:20:23.423 time_based=1 00:20:23.423 runtime=10 00:20:23.423 ioengine=libaio 00:20:23.423 direct=1 00:20:23.423 bs=4096 00:20:23.423 iodepth=1 00:20:23.423 norandommap=1 00:20:23.423 numjobs=1 00:20:23.423 00:20:23.423 [job0] 00:20:23.423 filename=/dev/nvme0n1 00:20:23.423 [job1] 00:20:23.423 filename=/dev/nvme0n2 00:20:23.423 [job2] 00:20:23.423 filename=/dev/nvme0n3 00:20:23.423 [job3] 00:20:23.423 filename=/dev/nvme0n4 00:20:23.423 Could not set queue depth (nvme0n1) 00:20:23.423 Could not set queue depth (nvme0n2) 00:20:23.423 Could not set queue depth (nvme0n3) 00:20:23.423 Could not set queue depth (nvme0n4) 00:20:23.688 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:23.688 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:23.688 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:23.688 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:23.688 fio-3.35 00:20:23.688 Starting 4 threads 00:20:26.236 20:12:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:20:26.236 20:12:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:20:26.236 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=270336, buflen=4096 00:20:26.236 fio: pid=45035, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:26.497 20:12:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:26.497 20:12:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:20:26.497 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=1187840, buflen=4096 00:20:26.497 fio: pid=45028, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:26.758 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=4972544, buflen=4096 00:20:26.758 fio: pid=45000, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:26.758 20:12:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:26.758 20:12:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:20:27.019 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=11431936, buflen=4096 00:20:27.019 fio: pid=45010, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:27.019 20:12:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:27.019 20:12:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:20:27.019 00:20:27.019 job0: (groupid=0, jobs=1): err=121 
(file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=45000: Wed May 15 20:12:19 2024 00:20:27.019 read: IOPS=402, BW=1608KiB/s (1647kB/s)(4856KiB/3019msec) 00:20:27.019 slat (usec): min=6, max=14974, avg=45.42, stdev=527.75 00:20:27.019 clat (usec): min=578, max=42208, avg=2411.16, stdev=7073.40 00:20:27.019 lat (usec): min=608, max=42232, avg=2456.60, stdev=7089.77 00:20:27.019 clat percentiles (usec): 00:20:27.019 | 1.00th=[ 848], 5.00th=[ 963], 10.00th=[ 1004], 20.00th=[ 1057], 00:20:27.019 | 30.00th=[ 1090], 40.00th=[ 1123], 50.00th=[ 1156], 60.00th=[ 1172], 00:20:27.019 | 70.00th=[ 1221], 80.00th=[ 1254], 90.00th=[ 1287], 95.00th=[ 1336], 00:20:27.019 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:27.019 | 99.99th=[42206] 00:20:27.019 bw ( KiB/s): min= 1136, max= 2768, per=31.88%, avg=1716.80, stdev=691.05, samples=5 00:20:27.019 iops : min= 284, max= 692, avg=429.20, stdev=172.76, samples=5 00:20:27.019 lat (usec) : 750=0.33%, 1000=9.47% 00:20:27.019 lat (msec) : 2=87.00%, 50=3.13% 00:20:27.019 cpu : usr=0.30%, sys=1.29%, ctx=1218, majf=0, minf=1 00:20:27.019 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:27.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.019 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.019 issued rwts: total=1215,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.019 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:27.019 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=45010: Wed May 15 20:12:19 2024 00:20:27.019 read: IOPS=861, BW=3445KiB/s (3527kB/s)(10.9MiB/3241msec) 00:20:27.019 slat (usec): min=6, max=29771, avg=59.28, stdev=829.69 00:20:27.019 clat (usec): min=720, max=6301, avg=1085.33, stdev=127.85 00:20:27.019 lat (usec): min=727, max=30905, avg=1144.63, stdev=841.82 00:20:27.019 clat percentiles (usec): 00:20:27.019 | 1.00th=[ 865], 5.00th=[ 947], 10.00th=[ 979], 20.00th=[ 1020], 00:20:27.019 | 30.00th=[ 1045], 40.00th=[ 1074], 50.00th=[ 1090], 60.00th=[ 1106], 00:20:27.019 | 70.00th=[ 1123], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1205], 00:20:27.019 | 99.00th=[ 1270], 99.50th=[ 1287], 99.90th=[ 1467], 99.95th=[ 1483], 00:20:27.019 | 99.99th=[ 6325] 00:20:27.019 bw ( KiB/s): min= 3205, max= 3600, per=65.16%, avg=3507.50, stdev=149.66, samples=6 00:20:27.019 iops : min= 801, max= 900, avg=876.83, stdev=37.51, samples=6 00:20:27.019 lat (usec) : 750=0.04%, 1000=14.15% 00:20:27.019 lat (msec) : 2=85.74%, 10=0.04% 00:20:27.019 cpu : usr=1.64%, sys=3.24%, ctx=2800, majf=0, minf=1 00:20:27.019 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:27.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.019 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.019 issued rwts: total=2792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.019 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:27.019 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=45028: Wed May 15 20:12:19 2024 00:20:27.019 read: IOPS=102, BW=410KiB/s (420kB/s)(1160KiB/2826msec) 00:20:27.019 slat (nsec): min=6537, max=61343, avg=24144.31, stdev=5704.23 00:20:27.019 clat (usec): min=348, max=42061, avg=9642.83, stdev=16702.38 00:20:27.019 lat (usec): min=377, max=42087, avg=9666.97, stdev=16703.25 00:20:27.019 clat percentiles (usec): 
00:20:27.019 | 1.00th=[ 506], 5.00th=[ 914], 10.00th=[ 947], 20.00th=[ 996], 00:20:27.019 | 30.00th=[ 1020], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1090], 00:20:27.019 | 70.00th=[ 1139], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:20:27.019 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:27.019 | 99.99th=[42206] 00:20:27.019 bw ( KiB/s): min= 96, max= 1536, per=8.34%, avg=449.60, stdev=623.70, samples=5 00:20:27.019 iops : min= 24, max= 384, avg=112.40, stdev=155.93, samples=5 00:20:27.019 lat (usec) : 500=0.69%, 750=0.34%, 1000=21.65% 00:20:27.019 lat (msec) : 2=56.01%, 50=20.96% 00:20:27.019 cpu : usr=0.14%, sys=0.25%, ctx=291, majf=0, minf=1 00:20:27.019 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:27.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.019 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.019 issued rwts: total=291,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.019 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:27.019 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=45035: Wed May 15 20:12:19 2024 00:20:27.019 read: IOPS=25, BW=102KiB/s (104kB/s)(264KiB/2594msec) 00:20:27.019 slat (nsec): min=24033, max=78228, avg=25230.52, stdev=6576.69 00:20:27.019 clat (usec): min=918, max=42161, avg=38855.62, stdev=10865.16 00:20:27.019 lat (usec): min=996, max=42186, avg=38880.86, stdev=10862.26 00:20:27.019 clat percentiles (usec): 00:20:27.019 | 1.00th=[ 922], 5.00th=[ 1287], 10.00th=[41681], 20.00th=[41681], 00:20:27.019 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:20:27.019 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:20:27.019 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:27.019 | 99.99th=[42206] 00:20:27.019 bw ( KiB/s): min= 96, max= 112, per=1.90%, avg=102.40, stdev= 6.69, samples=5 00:20:27.019 iops : min= 24, max= 28, avg=25.60, stdev= 1.67, samples=5 00:20:27.019 lat (usec) : 1000=1.49% 00:20:27.019 lat (msec) : 2=5.97%, 50=91.04% 00:20:27.019 cpu : usr=0.00%, sys=0.12%, ctx=68, majf=0, minf=2 00:20:27.019 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:27.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.019 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.019 issued rwts: total=67,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.019 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:27.019 00:20:27.019 Run status group 0 (all jobs): 00:20:27.019 READ: bw=5382KiB/s (5511kB/s), 102KiB/s-3445KiB/s (104kB/s-3527kB/s), io=17.0MiB (17.9MB), run=2594-3241msec 00:20:27.019 00:20:27.019 Disk stats (read/write): 00:20:27.019 nvme0n1: ios=1161/0, merge=0/0, ticks=2690/0, in_queue=2690, util=93.99% 00:20:27.019 nvme0n2: ios=2698/0, merge=0/0, ticks=2666/0, in_queue=2666, util=93.46% 00:20:27.019 nvme0n3: ios=283/0, merge=0/0, ticks=2523/0, in_queue=2523, util=95.99% 00:20:27.019 nvme0n4: ios=67/0, merge=0/0, ticks=2576/0, in_queue=2576, util=96.01% 00:20:27.280 20:12:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:27.280 20:12:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:20:27.280 20:12:19 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:27.280 20:12:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:20:27.541 20:12:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:27.541 20:12:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:20:27.802 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:27.802 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:20:28.063 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:20:28.063 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 44636 00:20:28.063 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:20:28.063 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:28.063 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:28.063 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:28.063 20:12:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:20:28.063 20:12:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:20:28.063 20:12:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:28.063 20:12:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:20:28.063 20:12:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:28.063 20:12:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:20:28.063 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:20:28.063 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:20:28.063 nvmf hotplug test: fio failed as expected 00:20:28.063 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:28.325 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:20:28.325 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:20:28.325 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:20:28.325 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:20:28.325 20:12:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:20:28.325 20:12:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:28.325 20:12:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:20:28.325 20:12:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:28.325 20:12:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:20:28.325 20:12:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:28.325 20:12:20 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:28.325 rmmod nvme_tcp 00:20:28.325 rmmod nvme_fabrics 00:20:28.325 rmmod nvme_keyring 00:20:28.325 20:12:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:28.593 20:12:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:20:28.593 20:12:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:20:28.593 20:12:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 40585 ']' 00:20:28.593 20:12:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 40585 00:20:28.593 20:12:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 40585 ']' 00:20:28.593 20:12:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 40585 00:20:28.593 20:12:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:20:28.593 20:12:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:28.593 20:12:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 40585 00:20:28.593 20:12:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:28.593 20:12:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:28.593 20:12:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 40585' 00:20:28.593 killing process with pid 40585 00:20:28.593 20:12:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 40585 00:20:28.593 [2024-05-15 20:12:20.889363] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:28.593 20:12:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 40585 00:20:28.593 20:12:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:28.593 20:12:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:28.593 20:12:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:28.593 20:12:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:28.593 20:12:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:28.593 20:12:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.593 20:12:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:28.593 20:12:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:31.141 20:12:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:31.141 00:20:31.141 real 0m30.937s 00:20:31.141 user 2m35.556s 00:20:31.141 sys 0m10.215s 00:20:31.141 20:12:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:31.141 20:12:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.141 ************************************ 00:20:31.141 END TEST nvmf_fio_target 00:20:31.141 ************************************ 00:20:31.141 20:12:23 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:20:31.141 20:12:23 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:31.141 20:12:23 nvmf_tcp -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:20:31.141 20:12:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:31.141 ************************************ 00:20:31.141 START TEST nvmf_bdevio 00:20:31.141 ************************************ 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:20:31.141 * Looking for test storage... 00:20:31.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:31.141 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:31.142 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:31.142 20:12:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:20:31.142 20:12:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:31.142 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:31.142 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:31.142 20:12:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:20:31.142 20:12:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:39.285 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:39.285 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:39.285 Found net devices under 0000:31:00.0: cvl_0_0 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:39.285 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:39.286 
Found net devices under 0000:31:00.1: cvl_0_1 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:39.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:39.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.481 ms 00:20:39.286 00:20:39.286 --- 10.0.0.2 ping statistics --- 00:20:39.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.286 rtt min/avg/max/mdev = 0.481/0.481/0.481/0.000 ms 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:39.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:39.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:20:39.286 00:20:39.286 --- 10.0.0.1 ping statistics --- 00:20:39.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.286 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=50587 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 50587 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 50587 ']' 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:39.286 20:12:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:39.286 [2024-05-15 20:12:31.630214] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:20:39.286 [2024-05-15 20:12:31.630261] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:39.286 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.286 [2024-05-15 20:12:31.718290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:39.286 [2024-05-15 20:12:31.781499] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:39.286 [2024-05-15 20:12:31.781537] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:39.286 [2024-05-15 20:12:31.781545] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:39.286 [2024-05-15 20:12:31.781551] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:39.286 [2024-05-15 20:12:31.781557] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:39.286 [2024-05-15 20:12:31.781698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:39.286 [2024-05-15 20:12:31.781855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:39.286 [2024-05-15 20:12:31.782007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:39.286 [2024-05-15 20:12:31.782008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:40.228 20:12:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:40.228 20:12:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:20:40.228 20:12:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:40.228 20:12:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:40.228 20:12:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:40.228 20:12:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:40.228 20:12:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:40.228 20:12:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.228 20:12:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:40.228 [2024-05-15 20:12:32.552184] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:40.228 20:12:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.228 20:12:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:40.228 20:12:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.228 20:12:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:40.228 Malloc0 00:20:40.228 20:12:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.228 20:12:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:40.228 20:12:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.228 20:12:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:40.228 20:12:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.228 20:12:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:40.228 20:12:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.228 20:12:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:40.228 20:12:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.228 20:12:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:40.228 20:12:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.228 20:12:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:20:40.228 [2024-05-15 20:12:32.595393] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:40.228 [2024-05-15 20:12:32.595645] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:40.228 20:12:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.228 20:12:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:20:40.228 20:12:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:40.228 20:12:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:20:40.228 20:12:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:20:40.228 20:12:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:40.228 20:12:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:40.228 { 00:20:40.228 "params": { 00:20:40.228 "name": "Nvme$subsystem", 00:20:40.228 "trtype": "$TEST_TRANSPORT", 00:20:40.228 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.228 "adrfam": "ipv4", 00:20:40.228 "trsvcid": "$NVMF_PORT", 00:20:40.228 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.228 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.228 "hdgst": ${hdgst:-false}, 00:20:40.228 "ddgst": ${ddgst:-false} 00:20:40.228 }, 00:20:40.228 "method": "bdev_nvme_attach_controller" 00:20:40.228 } 00:20:40.228 EOF 00:20:40.228 )") 00:20:40.228 20:12:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:20:40.228 20:12:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:20:40.228 20:12:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:20:40.228 20:12:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:40.228 "params": { 00:20:40.228 "name": "Nvme1", 00:20:40.228 "trtype": "tcp", 00:20:40.228 "traddr": "10.0.0.2", 00:20:40.228 "adrfam": "ipv4", 00:20:40.228 "trsvcid": "4420", 00:20:40.228 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:40.228 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:40.228 "hdgst": false, 00:20:40.228 "ddgst": false 00:20:40.228 }, 00:20:40.228 "method": "bdev_nvme_attach_controller" 00:20:40.228 }' 00:20:40.228 [2024-05-15 20:12:32.645402] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:20:40.228 [2024-05-15 20:12:32.645453] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid50872 ] 00:20:40.228 EAL: No free 2048 kB hugepages reported on node 1 00:20:40.489 [2024-05-15 20:12:32.733604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:40.489 [2024-05-15 20:12:32.831914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.489 [2024-05-15 20:12:32.832050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.489 [2024-05-15 20:12:32.832054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.748 I/O targets: 00:20:40.748 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:40.748 00:20:40.748 00:20:40.748 CUnit - A unit testing framework for C - Version 2.1-3 00:20:40.748 http://cunit.sourceforge.net/ 00:20:40.748 00:20:40.748 00:20:40.748 Suite: bdevio tests on: Nvme1n1 00:20:40.748 Test: blockdev write read block ...passed 00:20:40.748 Test: blockdev write zeroes read block ...passed 00:20:41.008 Test: blockdev write zeroes read no split ...passed 00:20:41.008 Test: blockdev write zeroes read split ...passed 00:20:41.008 Test: blockdev write zeroes read split partial ...passed 00:20:41.008 Test: blockdev reset ...[2024-05-15 20:12:33.276287] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:41.008 [2024-05-15 20:12:33.276351] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba340 (9): Bad file descriptor 00:20:41.008 [2024-05-15 20:12:33.298101] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:41.008 passed 00:20:41.008 Test: blockdev write read 8 blocks ...passed 00:20:41.008 Test: blockdev write read size > 128k ...passed 00:20:41.008 Test: blockdev write read invalid size ...passed 00:20:41.008 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:41.008 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:41.008 Test: blockdev write read max offset ...passed 00:20:41.008 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:41.008 Test: blockdev writev readv 8 blocks ...passed 00:20:41.008 Test: blockdev writev readv 30 x 1block ...passed 00:20:41.268 Test: blockdev writev readv block ...passed 00:20:41.268 Test: blockdev writev readv size > 128k ...passed 00:20:41.268 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:41.268 Test: blockdev comparev and writev ...[2024-05-15 20:12:33.526062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:41.268 [2024-05-15 20:12:33.526088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:41.268 [2024-05-15 20:12:33.526099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:41.268 [2024-05-15 20:12:33.526105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:41.268 [2024-05-15 20:12:33.526653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:41.268 [2024-05-15 20:12:33.526663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:41.268 [2024-05-15 20:12:33.526672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:41.268 [2024-05-15 20:12:33.526678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:41.268 [2024-05-15 20:12:33.527204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:41.268 [2024-05-15 20:12:33.527213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:41.268 [2024-05-15 20:12:33.527223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:41.268 [2024-05-15 20:12:33.527228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:41.268 [2024-05-15 20:12:33.527758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:41.268 [2024-05-15 20:12:33.527766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:41.268 [2024-05-15 20:12:33.527776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:41.268 [2024-05-15 20:12:33.527781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:41.268 passed 00:20:41.268 Test: blockdev nvme passthru rw ...passed 00:20:41.268 Test: blockdev nvme passthru vendor specific ...[2024-05-15 20:12:33.612115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:41.268 [2024-05-15 20:12:33.612128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:41.268 [2024-05-15 20:12:33.612572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:41.268 [2024-05-15 20:12:33.612581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:41.268 [2024-05-15 20:12:33.613009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:41.268 [2024-05-15 20:12:33.613018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:41.268 [2024-05-15 20:12:33.613448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:41.268 [2024-05-15 20:12:33.613457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:41.268 passed 00:20:41.268 Test: blockdev nvme admin passthru ...passed 00:20:41.268 Test: blockdev copy ...passed 00:20:41.268 00:20:41.268 Run Summary: Type Total Ran Passed Failed Inactive 00:20:41.268 suites 1 1 n/a 0 0 00:20:41.268 tests 23 23 23 0 0 00:20:41.268 asserts 152 152 152 0 n/a 00:20:41.268 00:20:41.268 Elapsed time = 1.040 seconds 00:20:41.529 20:12:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:41.529 20:12:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.529 20:12:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:41.529 20:12:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.529 20:12:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:41.529 20:12:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:20:41.529 20:12:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:41.529 20:12:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:20:41.529 20:12:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:41.529 20:12:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:20:41.529 20:12:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:41.529 20:12:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:41.529 rmmod nvme_tcp 00:20:41.529 rmmod nvme_fabrics 00:20:41.529 rmmod nvme_keyring 00:20:41.529 20:12:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:41.529 20:12:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:20:41.529 20:12:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:20:41.529 20:12:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 50587 ']' 00:20:41.529 20:12:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 50587 00:20:41.529 20:12:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 
50587 ']' 00:20:41.529 20:12:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 50587 00:20:41.529 20:12:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:20:41.529 20:12:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:41.529 20:12:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 50587 00:20:41.529 20:12:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:20:41.529 20:12:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:20:41.529 20:12:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 50587' 00:20:41.529 killing process with pid 50587 00:20:41.529 20:12:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 50587 00:20:41.530 [2024-05-15 20:12:33.926084] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:41.530 20:12:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 50587 00:20:41.790 20:12:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:41.790 20:12:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:41.790 20:12:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:41.790 20:12:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:41.790 20:12:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:41.790 20:12:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.790 20:12:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:41.790 20:12:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.702 20:12:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:43.702 00:20:43.702 real 0m13.015s 00:20:43.702 user 0m13.955s 00:20:43.702 sys 0m6.718s 00:20:43.964 20:12:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:43.964 20:12:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:43.964 ************************************ 00:20:43.964 END TEST nvmf_bdevio 00:20:43.964 ************************************ 00:20:43.964 20:12:36 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:43.964 20:12:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:43.964 20:12:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:43.964 20:12:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:43.964 ************************************ 00:20:43.964 START TEST nvmf_auth_target 00:20:43.964 ************************************ 00:20:43.964 20:12:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:43.964 * Looking for test storage... 
00:20:43.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:43.964 20:12:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:43.964 20:12:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:20:43.964 20:12:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:43.964 20:12:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:43.964 20:12:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:43.964 20:12:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:43.964 20:12:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:43.964 20:12:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:43.964 20:12:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:43.964 20:12:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:43.964 20:12:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:43.964 20:12:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:43.964 20:12:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:43.964 20:12:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:43.964 20:12:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:43.964 20:12:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:43.964 20:12:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:43.964 20:12:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:43.964 20:12:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:43.964 20:12:36 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:43.964 20:12:36 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:43.964 20:12:36 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:43.964 20:12:36 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.964 20:12:36 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.964 20:12:36 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.964 20:12:36 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:20:43.964 20:12:36 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.964 20:12:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:20:43.964 20:12:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:43.964 20:12:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:43.964 20:12:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:43.965 20:12:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:43.965 20:12:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:43.965 20:12:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:43.965 20:12:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:43.965 20:12:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:43.965 20:12:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:43.965 20:12:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:43.965 20:12:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:20:43.965 20:12:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:43.965 20:12:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:20:43.965 20:12:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:20:43.965 20:12:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@57 -- # nvmftestinit 00:20:43.965 20:12:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # 
'[' -z tcp ']' 00:20:43.965 20:12:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:43.965 20:12:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:43.965 20:12:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:43.965 20:12:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:43.965 20:12:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.965 20:12:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:43.965 20:12:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.965 20:12:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:43.965 20:12:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:43.965 20:12:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:20:43.965 20:12:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.114 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:52.114 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:20:52.114 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:52.114 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:52.114 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:52.114 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:52.114 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:52.114 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:20:52.114 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:52.114 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:20:52.114 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:20:52.114 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:20:52.114 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:20:52.114 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:20:52.114 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:20:52.114 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:52.114 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:52.114 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:52.114 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:52.114 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:52.114 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:52.114 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:52.114 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:52.114 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:52.114 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:52.114 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:52.114 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:52.114 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:52.114 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:52.114 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:52.115 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:52.115 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:52.115 Found net devices under 
0000:31:00.0: cvl_0_0 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:52.115 Found net devices under 0000:31:00.1: cvl_0_1 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:52.115 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:52.376 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:52.376 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:20:52.376 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:52.376 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:52.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:52.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:20:52.376 00:20:52.376 --- 10.0.0.2 ping statistics --- 00:20:52.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.376 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:20:52.376 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:52.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:52.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.361 ms 00:20:52.376 00:20:52.376 --- 10.0.0.1 ping statistics --- 00:20:52.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.376 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:20:52.376 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:52.376 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:20:52.376 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:52.376 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:52.376 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:52.376 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:52.376 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:52.376 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:52.376 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:52.376 20:12:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@58 -- # nvmfappstart -L nvmf_auth 00:20:52.376 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:52.376 20:12:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:52.376 20:12:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.376 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=55879 00:20:52.376 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 55879 00:20:52.376 20:12:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:20:52.376 20:12:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 55879 ']' 00:20:52.376 20:12:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:52.376 20:12:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:52.376 20:12:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
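nvmfappstart above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace, records nvmfpid=55879, and waitforlisten then blocks until the RPC socket at /var/tmp/spdk.sock (the rpc_addr printed in the trace) is available. A generic polling loop in the same spirit, shown only as a sketch: the retry budget and messages are illustrative, and the real waitforlisten helper in autotest_common.sh is more thorough.

sock=/var/tmp/spdk.sock   # rpc_addr from the trace above
pid=55879                 # nvmfpid from the trace above
for _ in $(seq 1 100); do
  if ! kill -0 "$pid" 2>/dev/null; then
    echo "nvmf_tgt ($pid) exited before listening" >&2
    break
  fi
  if [ -S "$sock" ]; then
    echo "nvmf_tgt is listening on $sock"
    break
  fi
  sleep 0.1
done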
00:20:52.376 20:12:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:52.376 20:12:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.320 20:12:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:53.320 20:12:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:20:53.320 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:53.320 20:12:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:53.320 20:12:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.320 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:53.320 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # hostpid=55930 00:20:53.320 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:53.320 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:20:53.320 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # gen_dhchap_key null 48 00:20:53.320 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:53.320 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:53.320 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:53.320 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:20:53.320 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:20:53.320 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:53.320 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=de5a8feef8c43316a4703d31fb791935018e0ccb1ac06979 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.XiM 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key de5a8feef8c43316a4703d31fb791935018e0ccb1ac06979 0 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 de5a8feef8c43316a4703d31fb791935018e0ccb1ac06979 0 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=de5a8feef8c43316a4703d31fb791935018e0ccb1ac06979 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.XiM 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.XiM 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # keys[0]=/tmp/spdk.key-null.XiM 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # gen_dhchap_key sha256 32 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ffa7beff38073d4a3cafedf9efed67c7 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.IjL 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ffa7beff38073d4a3cafedf9efed67c7 1 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ffa7beff38073d4a3cafedf9efed67c7 1 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ffa7beff38073d4a3cafedf9efed67c7 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.IjL 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.IjL 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # keys[1]=/tmp/spdk.key-sha256.IjL 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # gen_dhchap_key sha384 48 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=28493065835df9f7d0fe8f2422a2f6e789c82902d0645701 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ylz 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 28493065835df9f7d0fe8f2422a2f6e789c82902d0645701 2 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 28493065835df9f7d0fe8f2422a2f6e789c82902d0645701 2 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=28493065835df9f7d0fe8f2422a2f6e789c82902d0645701 00:20:53.582 
20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:20:53.582 20:12:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:53.582 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ylz 00:20:53.582 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ylz 00:20:53.582 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # keys[2]=/tmp/spdk.key-sha384.ylz 00:20:53.582 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:20:53.582 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:53.582 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:53.582 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:53.582 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:20:53.582 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:20:53.582 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:53.582 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c98efd35a8942b192d65eb84e2c1b5b48c3429cac001549917cbea75d898418e 00:20:53.582 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:20:53.582 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Yvg 00:20:53.582 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c98efd35a8942b192d65eb84e2c1b5b48c3429cac001549917cbea75d898418e 3 00:20:53.582 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c98efd35a8942b192d65eb84e2c1b5b48c3429cac001549917cbea75d898418e 3 00:20:53.582 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:53.582 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:53.582 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c98efd35a8942b192d65eb84e2c1b5b48c3429cac001549917cbea75d898418e 00:20:53.582 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:20:53.582 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:53.844 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Yvg 00:20:53.844 20:12:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Yvg 00:20:53.844 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[3]=/tmp/spdk.key-sha512.Yvg 00:20:53.844 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # waitforlisten 55879 00:20:53.844 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 55879 ']' 00:20:53.844 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.844 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:53.844 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
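Each DHCHAP secret traced above is produced the same way: gen_dhchap_key reads random bytes with xxd -p -c0 -l <bytes> /dev/urandom, wraps them into the DHHC-1:<digest-id>:...: form via the inline python helper, writes the result to a mktemp file, and restricts it to mode 0600. Below is a reduced sketch of only the steps directly visible in the trace; the DHHC-1 wrapping (format_dhchap_key, including whatever trailing checksum it appends) is left to the real helper in nvmf/common.sh, and the function name is illustrative.

gen_raw_key_material() {
  local hex_len=$1                            # e.g. 48 hex characters for keys[0]
  xxd -p -c0 -l $((hex_len / 2)) /dev/urandom
}

key=$(gen_raw_key_material 48)                # raw hex material, as in the trace
file=$(mktemp -t spdk.key-null.XXX)           # e.g. /tmp/spdk.key-null.XiM above
# the real helper writes the DHHC-1:00:...-wrapped form of "$key" into "$file"
chmod 0600 "$file"
echo "$file"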
00:20:53.844 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:53.844 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.844 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:53.844 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:20:53.844 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # waitforlisten 55930 /var/tmp/host.sock 00:20:53.844 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 55930 ']' 00:20:53.844 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:20:53.844 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:53.844 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:53.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:20:53.844 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:53.844 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.105 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:54.105 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:20:54.105 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@71 -- # rpc_cmd 00:20:54.105 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.105 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.105 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.105 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:20:54.105 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.XiM 00:20:54.105 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.105 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.105 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.105 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.XiM 00:20:54.105 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.XiM 00:20:54.366 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:20:54.366 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.IjL 00:20:54.366 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.366 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.366 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.366 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.IjL 00:20:54.366 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
keyring_file_add_key key1 /tmp/spdk.key-sha256.IjL 00:20:54.627 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:20:54.627 20:12:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ylz 00:20:54.627 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.627 20:12:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.627 20:12:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.627 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.ylz 00:20:54.627 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.ylz 00:20:54.888 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:20:54.888 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Yvg 00:20:54.888 20:12:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.888 20:12:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.888 20:12:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.888 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Yvg 00:20:54.888 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Yvg 00:20:55.149 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:20:55.149 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:55.149 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:55.149 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:55.149 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:55.149 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 0 00:20:55.149 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:55.149 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:55.149 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:55.149 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:55.149 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:20:55.149 20:12:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.149 20:12:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.149 20:12:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.149 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:55.149 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:55.409 00:20:55.670 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:55.670 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:55.670 20:12:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.670 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.670 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.670 20:12:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.670 20:12:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.670 20:12:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.670 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:55.670 { 00:20:55.670 "cntlid": 1, 00:20:55.670 "qid": 0, 00:20:55.670 "state": "enabled", 00:20:55.670 "listen_address": { 00:20:55.670 "trtype": "TCP", 00:20:55.670 "adrfam": "IPv4", 00:20:55.670 "traddr": "10.0.0.2", 00:20:55.670 "trsvcid": "4420" 00:20:55.670 }, 00:20:55.670 "peer_address": { 00:20:55.670 "trtype": "TCP", 00:20:55.671 "adrfam": "IPv4", 00:20:55.671 "traddr": "10.0.0.1", 00:20:55.671 "trsvcid": "38044" 00:20:55.671 }, 00:20:55.671 "auth": { 00:20:55.671 "state": "completed", 00:20:55.671 "digest": "sha256", 00:20:55.671 "dhgroup": "null" 00:20:55.671 } 00:20:55.671 } 00:20:55.671 ]' 00:20:55.671 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:55.932 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:55.932 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:55.932 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:55.932 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:55.932 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.932 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.932 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.193 20:12:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZGU1YThmZWVmOGM0MzMxNmE0NzAzZDMxZmI3OTE5MzUwMThlMGNjYjFhYzA2OTc5gt6wRQ==: 00:20:56.766 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:56.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.766 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:56.766 20:12:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.766 20:12:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.766 20:12:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.766 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:56.766 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:56.766 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:57.026 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 1 00:20:57.026 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:57.026 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:57.026 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:57.026 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:57.026 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:20:57.026 20:12:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.026 20:12:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.026 20:12:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.026 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:57.026 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:57.287 00:20:57.287 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:57.287 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:57.287 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.547 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.547 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.547 20:12:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.547 20:12:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.547 20:12:49 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.547 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:57.547 { 00:20:57.547 "cntlid": 3, 00:20:57.547 "qid": 0, 00:20:57.547 "state": "enabled", 00:20:57.547 "listen_address": { 00:20:57.547 "trtype": "TCP", 00:20:57.547 "adrfam": "IPv4", 00:20:57.547 "traddr": "10.0.0.2", 00:20:57.547 "trsvcid": "4420" 00:20:57.547 }, 00:20:57.547 "peer_address": { 00:20:57.547 "trtype": "TCP", 00:20:57.547 "adrfam": "IPv4", 00:20:57.547 "traddr": "10.0.0.1", 00:20:57.547 "trsvcid": "56410" 00:20:57.547 }, 00:20:57.547 "auth": { 00:20:57.547 "state": "completed", 00:20:57.547 "digest": "sha256", 00:20:57.547 "dhgroup": "null" 00:20:57.547 } 00:20:57.547 } 00:20:57.547 ]' 00:20:57.547 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:57.547 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:57.547 20:12:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:57.547 20:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:57.547 20:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:57.808 20:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.808 20:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.808 20:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.808 20:12:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZmZhN2JlZmYzODA3M2Q0YTNjYWZlZGY5ZWZlZDY3YzfQuTxi: 00:20:58.750 20:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.750 20:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:58.750 20:12:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.750 20:12:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.750 20:12:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.750 20:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:58.750 20:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:58.750 20:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:59.012 20:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 2 00:20:59.012 20:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:59.012 20:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:59.012 20:12:51 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@36 -- # dhgroup=null 00:20:59.012 20:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:59.012 20:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 00:20:59.012 20:12:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.012 20:12:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.012 20:12:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.012 20:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:59.012 20:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:59.273 00:20:59.273 20:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:59.273 20:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:59.273 20:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.273 20:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.273 20:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.273 20:12:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.273 20:12:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.535 20:12:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.535 20:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:59.535 { 00:20:59.535 "cntlid": 5, 00:20:59.535 "qid": 0, 00:20:59.535 "state": "enabled", 00:20:59.535 "listen_address": { 00:20:59.535 "trtype": "TCP", 00:20:59.535 "adrfam": "IPv4", 00:20:59.535 "traddr": "10.0.0.2", 00:20:59.535 "trsvcid": "4420" 00:20:59.535 }, 00:20:59.535 "peer_address": { 00:20:59.535 "trtype": "TCP", 00:20:59.535 "adrfam": "IPv4", 00:20:59.535 "traddr": "10.0.0.1", 00:20:59.535 "trsvcid": "56434" 00:20:59.535 }, 00:20:59.535 "auth": { 00:20:59.535 "state": "completed", 00:20:59.535 "digest": "sha256", 00:20:59.535 "dhgroup": "null" 00:20:59.535 } 00:20:59.535 } 00:20:59.535 ]' 00:20:59.535 20:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:59.535 20:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:59.535 20:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:59.535 20:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:59.535 20:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:59.535 20:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.535 20:12:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.535 20:12:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.795 20:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:Mjg0OTMwNjU4MzVkZjlmN2QwZmU4ZjI0MjJhMmY2ZTc4OWM4MjkwMmQwNjQ1NzAxxkicmQ==: 00:21:00.368 20:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.368 20:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:00.368 20:12:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.368 20:12:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.368 20:12:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.368 20:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:00.368 20:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:00.368 20:12:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:00.628 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 3 00:21:00.628 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:00.628 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:00.628 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:00.628 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:00.628 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:00.628 20:12:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.628 20:12:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.628 20:12:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.628 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:00.628 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:00.888 00:21:00.888 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:00.888 20:12:53 
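The trace above runs auth.sh's connect_authenticate helper once per digest/dhgroup/key combination; at this point it is the sha256/null group with key3. A minimal standalone sketch of that RPC sequence, using only commands, sockets, NQNs and key names that appear in this log; the shell variables are purely illustrative, the target-side calls are shown against rpc.py's default socket instead of the script's rpc_cmd helper, and key3 is assumed to have been registered with the target earlier in auth.sh, outside this excerpt:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # illustrative variable
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

# limit the host-side bdev_nvme driver to the digest/dhgroup pair under test
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
# let this host NQN authenticate against the subsystem with the chosen key
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3
# attach a controller through the host-side bdev layer, authenticating with the same key
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
  -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key3
# tear down again before the next digest/dhgroup/key combination
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"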
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.888 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:01.149 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.149 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.149 20:12:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.149 20:12:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.149 20:12:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.149 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:01.149 { 00:21:01.149 "cntlid": 7, 00:21:01.149 "qid": 0, 00:21:01.149 "state": "enabled", 00:21:01.149 "listen_address": { 00:21:01.149 "trtype": "TCP", 00:21:01.149 "adrfam": "IPv4", 00:21:01.149 "traddr": "10.0.0.2", 00:21:01.149 "trsvcid": "4420" 00:21:01.149 }, 00:21:01.149 "peer_address": { 00:21:01.149 "trtype": "TCP", 00:21:01.149 "adrfam": "IPv4", 00:21:01.149 "traddr": "10.0.0.1", 00:21:01.149 "trsvcid": "56466" 00:21:01.149 }, 00:21:01.149 "auth": { 00:21:01.149 "state": "completed", 00:21:01.149 "digest": "sha256", 00:21:01.149 "dhgroup": "null" 00:21:01.149 } 00:21:01.149 } 00:21:01.149 ]' 00:21:01.149 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:01.149 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:01.149 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:01.149 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:21:01.149 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:01.409 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.410 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.410 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.410 20:12:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:Yzk4ZWZkMzVhODk0MmIxOTJkNjVlYjg0ZTJjMWI1YjQ4YzM0MjljYWMwMDE1NDk5MTdjYmVhNzVkODk4NDE4ZSJBSbo=: 00:21:02.352 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.352 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:02.352 20:12:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.352 20:12:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.352 20:12:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.352 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for 
dhgroup in "${dhgroups[@]}" 00:21:02.352 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:02.352 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:02.352 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:02.614 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 0 00:21:02.614 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:02.614 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:02.614 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:02.614 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:02.614 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:21:02.614 20:12:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.614 20:12:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.614 20:12:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.614 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:02.614 20:12:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:02.876 00:21:02.876 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:02.876 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:02.876 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.876 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.876 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.876 20:12:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.876 20:12:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.138 20:12:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.138 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:03.138 { 00:21:03.138 "cntlid": 9, 00:21:03.138 "qid": 0, 00:21:03.138 "state": "enabled", 00:21:03.138 "listen_address": { 00:21:03.138 "trtype": "TCP", 00:21:03.138 "adrfam": "IPv4", 00:21:03.138 "traddr": "10.0.0.2", 00:21:03.138 "trsvcid": "4420" 00:21:03.138 }, 00:21:03.138 "peer_address": { 00:21:03.138 "trtype": "TCP", 00:21:03.138 "adrfam": "IPv4", 00:21:03.138 "traddr": "10.0.0.1", 
00:21:03.138 "trsvcid": "56488" 00:21:03.138 }, 00:21:03.138 "auth": { 00:21:03.138 "state": "completed", 00:21:03.138 "digest": "sha256", 00:21:03.138 "dhgroup": "ffdhe2048" 00:21:03.138 } 00:21:03.138 } 00:21:03.138 ]' 00:21:03.138 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:03.138 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:03.138 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:03.138 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:03.138 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:03.138 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.138 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.138 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.399 20:12:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZGU1YThmZWVmOGM0MzMxNmE0NzAzZDMxZmI3OTE5MzUwMThlMGNjYjFhYzA2OTc5gt6wRQ==: 00:21:03.970 20:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.230 20:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:04.230 20:12:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.230 20:12:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.230 20:12:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.230 20:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:04.230 20:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:04.230 20:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:04.230 20:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 1 00:21:04.230 20:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:04.230 20:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:04.230 20:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:04.230 20:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:04.231 20:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:21:04.231 20:12:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.231 20:12:56 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:04.231 20:12:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.231 20:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:04.231 20:12:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:04.491 00:21:04.783 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:04.783 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:04.783 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.783 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.783 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.783 20:12:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.783 20:12:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.783 20:12:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.783 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:04.783 { 00:21:04.783 "cntlid": 11, 00:21:04.783 "qid": 0, 00:21:04.783 "state": "enabled", 00:21:04.783 "listen_address": { 00:21:04.783 "trtype": "TCP", 00:21:04.783 "adrfam": "IPv4", 00:21:04.783 "traddr": "10.0.0.2", 00:21:04.783 "trsvcid": "4420" 00:21:04.783 }, 00:21:04.783 "peer_address": { 00:21:04.783 "trtype": "TCP", 00:21:04.783 "adrfam": "IPv4", 00:21:04.783 "traddr": "10.0.0.1", 00:21:04.783 "trsvcid": "56514" 00:21:04.783 }, 00:21:04.783 "auth": { 00:21:04.783 "state": "completed", 00:21:04.783 "digest": "sha256", 00:21:04.783 "dhgroup": "ffdhe2048" 00:21:04.783 } 00:21:04.783 } 00:21:04.783 ]' 00:21:04.783 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:05.097 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:05.097 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:05.097 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:05.097 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:05.097 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.097 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.097 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.097 20:12:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZmZhN2JlZmYzODA3M2Q0YTNjYWZlZGY5ZWZlZDY3YzfQuTxi: 00:21:06.054 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.054 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:06.054 20:12:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.054 20:12:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.054 20:12:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.054 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:06.054 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:06.054 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:06.054 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 2 00:21:06.054 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:06.054 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:06.054 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:06.054 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:06.054 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 00:21:06.054 20:12:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.054 20:12:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.054 20:12:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.054 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:06.054 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:06.316 00:21:06.577 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:06.577 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:06.577 20:12:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.577 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.577 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:06.577 20:12:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.577 20:12:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.577 20:12:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.577 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:06.577 { 00:21:06.577 "cntlid": 13, 00:21:06.577 "qid": 0, 00:21:06.577 "state": "enabled", 00:21:06.577 "listen_address": { 00:21:06.577 "trtype": "TCP", 00:21:06.577 "adrfam": "IPv4", 00:21:06.577 "traddr": "10.0.0.2", 00:21:06.577 "trsvcid": "4420" 00:21:06.577 }, 00:21:06.577 "peer_address": { 00:21:06.577 "trtype": "TCP", 00:21:06.577 "adrfam": "IPv4", 00:21:06.577 "traddr": "10.0.0.1", 00:21:06.577 "trsvcid": "56538" 00:21:06.577 }, 00:21:06.577 "auth": { 00:21:06.577 "state": "completed", 00:21:06.577 "digest": "sha256", 00:21:06.577 "dhgroup": "ffdhe2048" 00:21:06.577 } 00:21:06.577 } 00:21:06.577 ]' 00:21:06.577 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:06.838 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:06.838 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:06.838 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:06.838 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:06.838 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.838 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.838 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.099 20:12:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:Mjg0OTMwNjU4MzVkZjlmN2QwZmU4ZjI0MjJhMmY2ZTc4OWM4MjkwMmQwNjQ1NzAxxkicmQ==: 00:21:07.671 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.671 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:07.671 20:13:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.671 20:13:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.671 20:13:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.671 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:07.671 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:07.671 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:07.931 20:13:00 nvmf_tcp.nvmf_auth_target -- 
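Besides the bdev_nvme attach path, the trace also has the plain nvme-cli initiator complete DH-HMAC-CHAP against the same subsystem (the nvme connect / nvme disconnect pairs above). A hedged sketch of that host-side step: the flags and addresses are the ones printed in the trace, the shell variables are only for readability, and the secret is meant to be one of the DHHC-1:... strings already shown in the log, not a value introduced here:

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
secret='DHHC-1:02:...'   # copy one of the controller secrets from the trace above

# connect a single I/O queue, presenting the host NQN/ID and the DH-HMAC-CHAP secret
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
  --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret "$secret"
# the test only needs to see the controller appear, then drops it again
nvme disconnect -n "$subnqn"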
target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 3 00:21:07.931 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:07.931 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:07.931 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:07.931 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:07.931 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:07.931 20:13:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.931 20:13:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.931 20:13:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.931 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:07.931 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:08.192 00:21:08.192 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:08.192 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:08.193 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.453 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.453 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.453 20:13:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.453 20:13:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.453 20:13:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.453 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:08.453 { 00:21:08.453 "cntlid": 15, 00:21:08.453 "qid": 0, 00:21:08.453 "state": "enabled", 00:21:08.453 "listen_address": { 00:21:08.453 "trtype": "TCP", 00:21:08.453 "adrfam": "IPv4", 00:21:08.453 "traddr": "10.0.0.2", 00:21:08.453 "trsvcid": "4420" 00:21:08.453 }, 00:21:08.453 "peer_address": { 00:21:08.453 "trtype": "TCP", 00:21:08.453 "adrfam": "IPv4", 00:21:08.453 "traddr": "10.0.0.1", 00:21:08.453 "trsvcid": "41500" 00:21:08.453 }, 00:21:08.453 "auth": { 00:21:08.453 "state": "completed", 00:21:08.453 "digest": "sha256", 00:21:08.453 "dhgroup": "ffdhe2048" 00:21:08.453 } 00:21:08.453 } 00:21:08.453 ]' 00:21:08.453 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:08.453 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:08.453 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:08.453 20:13:00 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:08.453 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:08.715 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.715 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.715 20:13:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.715 20:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:Yzk4ZWZkMzVhODk0MmIxOTJkNjVlYjg0ZTJjMWI1YjQ4YzM0MjljYWMwMDE1NDk5MTdjYmVhNzVkODk4NDE4ZSJBSbo=: 00:21:09.657 20:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.657 20:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:09.657 20:13:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.657 20:13:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.657 20:13:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.657 20:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:09.657 20:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:09.657 20:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:09.657 20:13:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:09.917 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 0 00:21:09.917 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:09.917 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:09.917 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:09.917 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:09.917 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:21:09.917 20:13:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.917 20:13:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.917 20:13:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.917 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:09.917 20:13:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:10.178 00:21:10.178 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:10.178 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:10.178 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.439 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.439 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.439 20:13:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.439 20:13:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.439 20:13:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.439 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:10.439 { 00:21:10.439 "cntlid": 17, 00:21:10.439 "qid": 0, 00:21:10.439 "state": "enabled", 00:21:10.439 "listen_address": { 00:21:10.439 "trtype": "TCP", 00:21:10.439 "adrfam": "IPv4", 00:21:10.439 "traddr": "10.0.0.2", 00:21:10.439 "trsvcid": "4420" 00:21:10.439 }, 00:21:10.439 "peer_address": { 00:21:10.439 "trtype": "TCP", 00:21:10.439 "adrfam": "IPv4", 00:21:10.439 "traddr": "10.0.0.1", 00:21:10.439 "trsvcid": "41540" 00:21:10.439 }, 00:21:10.439 "auth": { 00:21:10.439 "state": "completed", 00:21:10.439 "digest": "sha256", 00:21:10.439 "dhgroup": "ffdhe3072" 00:21:10.439 } 00:21:10.439 } 00:21:10.439 ]' 00:21:10.439 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:10.439 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:10.439 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:10.439 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:10.439 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:10.439 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.439 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.439 20:13:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.700 20:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZGU1YThmZWVmOGM0MzMxNmE0NzAzZDMxZmI3OTE5MzUwMThlMGNjYjFhYzA2OTc5gt6wRQ==: 00:21:11.644 20:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.644 20:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:11.644 20:13:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.644 20:13:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.644 20:13:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.644 20:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:11.644 20:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:11.644 20:13:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:11.644 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 1 00:21:11.644 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:11.644 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:11.644 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:11.644 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:11.644 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:21:11.644 20:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.644 20:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.644 20:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.644 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:11.644 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:11.945 00:21:11.945 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:11.945 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:11.945 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.205 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.205 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.205 20:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.205 20:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.205 20:13:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.205 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:12.205 { 
00:21:12.205 "cntlid": 19, 00:21:12.205 "qid": 0, 00:21:12.205 "state": "enabled", 00:21:12.205 "listen_address": { 00:21:12.205 "trtype": "TCP", 00:21:12.205 "adrfam": "IPv4", 00:21:12.205 "traddr": "10.0.0.2", 00:21:12.205 "trsvcid": "4420" 00:21:12.205 }, 00:21:12.205 "peer_address": { 00:21:12.205 "trtype": "TCP", 00:21:12.205 "adrfam": "IPv4", 00:21:12.205 "traddr": "10.0.0.1", 00:21:12.205 "trsvcid": "41576" 00:21:12.205 }, 00:21:12.205 "auth": { 00:21:12.205 "state": "completed", 00:21:12.205 "digest": "sha256", 00:21:12.205 "dhgroup": "ffdhe3072" 00:21:12.205 } 00:21:12.205 } 00:21:12.205 ]' 00:21:12.205 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:12.205 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:12.205 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:12.205 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:12.205 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:12.465 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.465 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.465 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.465 20:13:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZmZhN2JlZmYzODA3M2Q0YTNjYWZlZGY5ZWZlZDY3YzfQuTxi: 00:21:13.406 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.406 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:13.406 20:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.406 20:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.406 20:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.406 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:13.406 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:13.406 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:13.406 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 2 00:21:13.406 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:13.406 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:13.406 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:13.406 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:13.406 
20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 00:21:13.406 20:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.406 20:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.406 20:13:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.406 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:13.406 20:13:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:13.668 00:21:13.929 20:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:13.929 20:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.929 20:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:13.929 20:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.929 20:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.929 20:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.929 20:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.929 20:13:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.929 20:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:13.929 { 00:21:13.929 "cntlid": 21, 00:21:13.929 "qid": 0, 00:21:13.929 "state": "enabled", 00:21:13.929 "listen_address": { 00:21:13.929 "trtype": "TCP", 00:21:13.929 "adrfam": "IPv4", 00:21:13.929 "traddr": "10.0.0.2", 00:21:13.929 "trsvcid": "4420" 00:21:13.929 }, 00:21:13.929 "peer_address": { 00:21:13.929 "trtype": "TCP", 00:21:13.929 "adrfam": "IPv4", 00:21:13.929 "traddr": "10.0.0.1", 00:21:13.929 "trsvcid": "41612" 00:21:13.929 }, 00:21:13.929 "auth": { 00:21:13.929 "state": "completed", 00:21:13.929 "digest": "sha256", 00:21:13.929 "dhgroup": "ffdhe3072" 00:21:13.929 } 00:21:13.929 } 00:21:13.929 ]' 00:21:13.929 20:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:14.189 20:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:14.189 20:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:14.189 20:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:14.189 20:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:14.189 20:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.189 20:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.189 20:13:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.450 20:13:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:Mjg0OTMwNjU4MzVkZjlmN2QwZmU4ZjI0MjJhMmY2ZTc4OWM4MjkwMmQwNjQ1NzAxxkicmQ==: 00:21:15.020 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.020 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:15.020 20:13:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.020 20:13:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.020 20:13:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.020 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:15.020 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:15.281 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:15.281 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 3 00:21:15.281 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:15.281 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:15.281 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:15.281 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:15.281 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:15.281 20:13:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.281 20:13:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.281 20:13:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.281 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:15.281 20:13:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:15.541 00:21:15.541 20:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:15.541 20:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:15.541 20:13:08 
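Every iteration in this trace finishes with the same verification: query the subsystem's qpairs on the target and assert that authentication completed with the expected digest and DH group (the recurring jq '.[0].auth.*' checks). A minimal sketch of that check for the sha256/ffdhe3072 iteration running at this point, with the same assumptions as above about rpc.py's default socket and the illustrative shell variables:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # illustrative variable
subnqn=nqn.2024-03.io.spdk:cnode0

# fetch the subsystem's qpairs and assert on the negotiated auth parameters
qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]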
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.802 20:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.802 20:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.802 20:13:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.802 20:13:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.802 20:13:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.802 20:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:15.802 { 00:21:15.802 "cntlid": 23, 00:21:15.802 "qid": 0, 00:21:15.802 "state": "enabled", 00:21:15.802 "listen_address": { 00:21:15.802 "trtype": "TCP", 00:21:15.802 "adrfam": "IPv4", 00:21:15.802 "traddr": "10.0.0.2", 00:21:15.802 "trsvcid": "4420" 00:21:15.802 }, 00:21:15.802 "peer_address": { 00:21:15.802 "trtype": "TCP", 00:21:15.802 "adrfam": "IPv4", 00:21:15.802 "traddr": "10.0.0.1", 00:21:15.802 "trsvcid": "41636" 00:21:15.803 }, 00:21:15.803 "auth": { 00:21:15.803 "state": "completed", 00:21:15.803 "digest": "sha256", 00:21:15.803 "dhgroup": "ffdhe3072" 00:21:15.803 } 00:21:15.803 } 00:21:15.803 ]' 00:21:15.803 20:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:15.803 20:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:15.803 20:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:16.063 20:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:16.063 20:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:16.063 20:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.063 20:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.063 20:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.325 20:13:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:Yzk4ZWZkMzVhODk0MmIxOTJkNjVlYjg0ZTJjMWI1YjQ4YzM0MjljYWMwMDE1NDk5MTdjYmVhNzVkODk4NDE4ZSJBSbo=: 00:21:16.895 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.895 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:16.895 20:13:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.895 20:13:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.895 20:13:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.895 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:16.895 20:13:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:16.895 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:16.895 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:17.155 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 0 00:21:17.155 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:17.155 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:17.156 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:17.156 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:17.156 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:21:17.156 20:13:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.156 20:13:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.156 20:13:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.156 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:17.156 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:17.416 00:21:17.416 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:17.416 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:17.416 20:13:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.676 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.676 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.676 20:13:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.676 20:13:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.676 20:13:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.676 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:17.676 { 00:21:17.676 "cntlid": 25, 00:21:17.676 "qid": 0, 00:21:17.676 "state": "enabled", 00:21:17.676 "listen_address": { 00:21:17.676 "trtype": "TCP", 00:21:17.676 "adrfam": "IPv4", 00:21:17.676 "traddr": "10.0.0.2", 00:21:17.676 "trsvcid": "4420" 00:21:17.676 }, 00:21:17.676 "peer_address": { 00:21:17.676 "trtype": "TCP", 00:21:17.676 "adrfam": "IPv4", 00:21:17.676 "traddr": "10.0.0.1", 00:21:17.676 "trsvcid": "59164" 00:21:17.676 }, 
00:21:17.676 "auth": { 00:21:17.676 "state": "completed", 00:21:17.676 "digest": "sha256", 00:21:17.676 "dhgroup": "ffdhe4096" 00:21:17.676 } 00:21:17.676 } 00:21:17.676 ]' 00:21:17.676 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:17.676 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:17.676 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:17.936 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:17.936 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:17.936 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.936 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.936 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.196 20:13:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZGU1YThmZWVmOGM0MzMxNmE0NzAzZDMxZmI3OTE5MzUwMThlMGNjYjFhYzA2OTc5gt6wRQ==: 00:21:18.768 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.768 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:18.768 20:13:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.768 20:13:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.768 20:13:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.768 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:18.768 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:18.768 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:19.030 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 1 00:21:19.030 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:19.030 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:19.030 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:19.030 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:19.030 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:21:19.030 20:13:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.030 20:13:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:19.030 20:13:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.030 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:19.030 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:19.290 00:21:19.290 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:19.290 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.290 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:19.550 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.550 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.550 20:13:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.550 20:13:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.550 20:13:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.550 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:19.550 { 00:21:19.550 "cntlid": 27, 00:21:19.550 "qid": 0, 00:21:19.550 "state": "enabled", 00:21:19.550 "listen_address": { 00:21:19.550 "trtype": "TCP", 00:21:19.550 "adrfam": "IPv4", 00:21:19.550 "traddr": "10.0.0.2", 00:21:19.550 "trsvcid": "4420" 00:21:19.550 }, 00:21:19.550 "peer_address": { 00:21:19.550 "trtype": "TCP", 00:21:19.550 "adrfam": "IPv4", 00:21:19.550 "traddr": "10.0.0.1", 00:21:19.550 "trsvcid": "59178" 00:21:19.550 }, 00:21:19.550 "auth": { 00:21:19.550 "state": "completed", 00:21:19.550 "digest": "sha256", 00:21:19.550 "dhgroup": "ffdhe4096" 00:21:19.550 } 00:21:19.550 } 00:21:19.550 ]' 00:21:19.550 20:13:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:19.550 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:19.550 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:19.811 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:19.811 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:19.811 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.811 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.811 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.811 20:13:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 
--dhchap-secret DHHC-1:01:ZmZhN2JlZmYzODA3M2Q0YTNjYWZlZGY5ZWZlZDY3YzfQuTxi: 00:21:20.753 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.753 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:20.753 20:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.753 20:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.753 20:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.753 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:20.753 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:20.753 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:21.014 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 2 00:21:21.014 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:21.014 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:21.014 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:21.014 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:21.014 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 00:21:21.014 20:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.014 20:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.014 20:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.014 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:21.014 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:21.275 00:21:21.275 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:21.275 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:21.275 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.536 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.536 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.536 20:13:13 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.536 20:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.536 20:13:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.536 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:21.536 { 00:21:21.536 "cntlid": 29, 00:21:21.536 "qid": 0, 00:21:21.536 "state": "enabled", 00:21:21.536 "listen_address": { 00:21:21.536 "trtype": "TCP", 00:21:21.536 "adrfam": "IPv4", 00:21:21.536 "traddr": "10.0.0.2", 00:21:21.536 "trsvcid": "4420" 00:21:21.536 }, 00:21:21.536 "peer_address": { 00:21:21.536 "trtype": "TCP", 00:21:21.536 "adrfam": "IPv4", 00:21:21.536 "traddr": "10.0.0.1", 00:21:21.536 "trsvcid": "59212" 00:21:21.536 }, 00:21:21.536 "auth": { 00:21:21.536 "state": "completed", 00:21:21.536 "digest": "sha256", 00:21:21.536 "dhgroup": "ffdhe4096" 00:21:21.536 } 00:21:21.536 } 00:21:21.536 ]' 00:21:21.536 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:21.536 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:21.536 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:21.536 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:21.536 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:21.536 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.536 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.536 20:13:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.796 20:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:Mjg0OTMwNjU4MzVkZjlmN2QwZmU4ZjI0MjJhMmY2ZTc4OWM4MjkwMmQwNjQ1NzAxxkicmQ==: 00:21:22.738 20:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.738 20:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:22.738 20:13:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.738 20:13:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.738 20:13:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.738 20:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:22.738 20:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:22.738 20:13:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:22.738 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 
ffdhe4096 3 00:21:22.738 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:22.738 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:22.738 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:22.738 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:22.738 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:22.738 20:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.738 20:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.738 20:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.738 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:22.738 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:23.000 00:21:23.000 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:23.000 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:23.000 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.261 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.261 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.261 20:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.261 20:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.261 20:13:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.261 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:23.261 { 00:21:23.261 "cntlid": 31, 00:21:23.261 "qid": 0, 00:21:23.261 "state": "enabled", 00:21:23.261 "listen_address": { 00:21:23.261 "trtype": "TCP", 00:21:23.261 "adrfam": "IPv4", 00:21:23.261 "traddr": "10.0.0.2", 00:21:23.261 "trsvcid": "4420" 00:21:23.261 }, 00:21:23.261 "peer_address": { 00:21:23.261 "trtype": "TCP", 00:21:23.261 "adrfam": "IPv4", 00:21:23.261 "traddr": "10.0.0.1", 00:21:23.261 "trsvcid": "59232" 00:21:23.261 }, 00:21:23.261 "auth": { 00:21:23.261 "state": "completed", 00:21:23.261 "digest": "sha256", 00:21:23.261 "dhgroup": "ffdhe4096" 00:21:23.261 } 00:21:23.261 } 00:21:23.261 ]' 00:21:23.261 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:23.261 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:23.261 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:23.261 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:21:23.261 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:23.522 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.522 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.522 20:13:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.522 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:Yzk4ZWZkMzVhODk0MmIxOTJkNjVlYjg0ZTJjMWI1YjQ4YzM0MjljYWMwMDE1NDk5MTdjYmVhNzVkODk4NDE4ZSJBSbo=: 00:21:24.464 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.464 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:24.464 20:13:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.464 20:13:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.464 20:13:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.464 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:24.464 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:24.464 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:24.464 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:24.725 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 0 00:21:24.725 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:24.725 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:24.725 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:24.725 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:24.725 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:21:24.725 20:13:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.725 20:13:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.726 20:13:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.726 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:24.726 20:13:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:24.987 00:21:24.987 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:24.987 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:24.987 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.248 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.248 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.248 20:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.248 20:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.248 20:13:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.248 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:25.248 { 00:21:25.248 "cntlid": 33, 00:21:25.248 "qid": 0, 00:21:25.248 "state": "enabled", 00:21:25.248 "listen_address": { 00:21:25.248 "trtype": "TCP", 00:21:25.248 "adrfam": "IPv4", 00:21:25.248 "traddr": "10.0.0.2", 00:21:25.248 "trsvcid": "4420" 00:21:25.248 }, 00:21:25.248 "peer_address": { 00:21:25.248 "trtype": "TCP", 00:21:25.248 "adrfam": "IPv4", 00:21:25.248 "traddr": "10.0.0.1", 00:21:25.248 "trsvcid": "59260" 00:21:25.248 }, 00:21:25.248 "auth": { 00:21:25.248 "state": "completed", 00:21:25.248 "digest": "sha256", 00:21:25.248 "dhgroup": "ffdhe6144" 00:21:25.248 } 00:21:25.248 } 00:21:25.248 ]' 00:21:25.248 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:25.248 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:25.248 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:25.248 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:25.248 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:25.511 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.511 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.511 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.511 20:13:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZGU1YThmZWVmOGM0MzMxNmE0NzAzZDMxZmI3OTE5MzUwMThlMGNjYjFhYzA2OTc5gt6wRQ==: 00:21:26.455 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.455 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:26.455 20:13:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.455 20:13:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.455 20:13:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.455 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:26.455 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:26.455 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:26.455 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 1 00:21:26.455 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:26.455 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:26.455 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:26.455 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:26.455 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:21:26.455 20:13:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.455 20:13:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.717 20:13:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.717 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:26.717 20:13:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:26.978 00:21:26.978 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:26.978 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:26.978 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.239 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.239 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.239 20:13:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.239 20:13:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.239 20:13:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.239 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:27.239 { 00:21:27.239 "cntlid": 35, 00:21:27.239 "qid": 0, 
00:21:27.239 "state": "enabled", 00:21:27.239 "listen_address": { 00:21:27.239 "trtype": "TCP", 00:21:27.239 "adrfam": "IPv4", 00:21:27.239 "traddr": "10.0.0.2", 00:21:27.239 "trsvcid": "4420" 00:21:27.239 }, 00:21:27.239 "peer_address": { 00:21:27.239 "trtype": "TCP", 00:21:27.239 "adrfam": "IPv4", 00:21:27.239 "traddr": "10.0.0.1", 00:21:27.239 "trsvcid": "59296" 00:21:27.239 }, 00:21:27.239 "auth": { 00:21:27.239 "state": "completed", 00:21:27.239 "digest": "sha256", 00:21:27.239 "dhgroup": "ffdhe6144" 00:21:27.239 } 00:21:27.239 } 00:21:27.239 ]' 00:21:27.239 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:27.239 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:27.239 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:27.239 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:27.239 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:27.500 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.500 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.500 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.500 20:13:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZmZhN2JlZmYzODA3M2Q0YTNjYWZlZGY5ZWZlZDY3YzfQuTxi: 00:21:28.442 20:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.442 20:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:28.442 20:13:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.442 20:13:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.442 20:13:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.442 20:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:28.442 20:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:28.442 20:13:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:28.703 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 2 00:21:28.703 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:28.703 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:28.703 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:28.703 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:28.703 20:13:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 00:21:28.703 20:13:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.704 20:13:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.704 20:13:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.704 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:28.704 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:28.964 00:21:28.964 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:28.964 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:28.964 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.226 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.226 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.226 20:13:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.226 20:13:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.226 20:13:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.226 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:29.226 { 00:21:29.226 "cntlid": 37, 00:21:29.226 "qid": 0, 00:21:29.226 "state": "enabled", 00:21:29.226 "listen_address": { 00:21:29.226 "trtype": "TCP", 00:21:29.226 "adrfam": "IPv4", 00:21:29.226 "traddr": "10.0.0.2", 00:21:29.226 "trsvcid": "4420" 00:21:29.226 }, 00:21:29.226 "peer_address": { 00:21:29.226 "trtype": "TCP", 00:21:29.226 "adrfam": "IPv4", 00:21:29.226 "traddr": "10.0.0.1", 00:21:29.226 "trsvcid": "33538" 00:21:29.226 }, 00:21:29.226 "auth": { 00:21:29.226 "state": "completed", 00:21:29.226 "digest": "sha256", 00:21:29.226 "dhgroup": "ffdhe6144" 00:21:29.226 } 00:21:29.226 } 00:21:29.226 ]' 00:21:29.226 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:29.226 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:29.226 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:29.487 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:29.487 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:29.487 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.487 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.487 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.487 20:13:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:Mjg0OTMwNjU4MzVkZjlmN2QwZmU4ZjI0MjJhMmY2ZTc4OWM4MjkwMmQwNjQ1NzAxxkicmQ==: 00:21:30.429 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.429 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:30.429 20:13:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.429 20:13:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.429 20:13:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.429 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:30.429 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:30.429 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:30.690 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 3 00:21:30.690 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:30.690 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:30.690 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:30.690 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:30.690 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:30.690 20:13:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.690 20:13:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.690 20:13:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.690 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:30.690 20:13:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:30.950 00:21:30.950 20:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:30.950 20:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.950 20:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:31.210 20:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.210 20:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.210 20:13:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.210 20:13:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.210 20:13:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.210 20:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:31.210 { 00:21:31.210 "cntlid": 39, 00:21:31.210 "qid": 0, 00:21:31.210 "state": "enabled", 00:21:31.210 "listen_address": { 00:21:31.210 "trtype": "TCP", 00:21:31.210 "adrfam": "IPv4", 00:21:31.210 "traddr": "10.0.0.2", 00:21:31.210 "trsvcid": "4420" 00:21:31.210 }, 00:21:31.210 "peer_address": { 00:21:31.210 "trtype": "TCP", 00:21:31.210 "adrfam": "IPv4", 00:21:31.210 "traddr": "10.0.0.1", 00:21:31.210 "trsvcid": "33558" 00:21:31.210 }, 00:21:31.210 "auth": { 00:21:31.210 "state": "completed", 00:21:31.210 "digest": "sha256", 00:21:31.210 "dhgroup": "ffdhe6144" 00:21:31.210 } 00:21:31.210 } 00:21:31.210 ]' 00:21:31.210 20:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:31.210 20:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:31.210 20:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:31.210 20:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:31.210 20:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:31.470 20:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.470 20:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.470 20:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.471 20:13:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:Yzk4ZWZkMzVhODk0MmIxOTJkNjVlYjg0ZTJjMWI1YjQ4YzM0MjljYWMwMDE1NDk5MTdjYmVhNzVkODk4NDE4ZSJBSbo=: 00:21:32.412 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.412 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:32.412 20:13:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.412 20:13:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.412 20:13:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.412 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:32.412 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # 
for keyid in "${!keys[@]}" 00:21:32.412 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:32.412 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:32.673 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 0 00:21:32.673 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:32.673 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:32.673 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:32.673 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:32.673 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:21:32.673 20:13:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.673 20:13:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.673 20:13:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.673 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:32.673 20:13:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:33.244 00:21:33.244 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:33.244 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:33.244 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.505 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.505 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.505 20:13:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.505 20:13:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.505 20:13:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.505 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:33.505 { 00:21:33.505 "cntlid": 41, 00:21:33.505 "qid": 0, 00:21:33.505 "state": "enabled", 00:21:33.505 "listen_address": { 00:21:33.505 "trtype": "TCP", 00:21:33.505 "adrfam": "IPv4", 00:21:33.505 "traddr": "10.0.0.2", 00:21:33.505 "trsvcid": "4420" 00:21:33.505 }, 00:21:33.505 "peer_address": { 00:21:33.505 "trtype": "TCP", 00:21:33.505 "adrfam": "IPv4", 00:21:33.505 "traddr": "10.0.0.1", 00:21:33.505 "trsvcid": "33586" 00:21:33.505 }, 00:21:33.505 "auth": { 00:21:33.505 "state": 
"completed", 00:21:33.505 "digest": "sha256", 00:21:33.505 "dhgroup": "ffdhe8192" 00:21:33.505 } 00:21:33.505 } 00:21:33.505 ]' 00:21:33.505 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:33.505 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:33.505 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:33.505 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:33.505 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:33.505 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.505 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.505 20:13:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.766 20:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZGU1YThmZWVmOGM0MzMxNmE0NzAzZDMxZmI3OTE5MzUwMThlMGNjYjFhYzA2OTc5gt6wRQ==: 00:21:34.727 20:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.727 20:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:34.727 20:13:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.727 20:13:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.727 20:13:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.727 20:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:34.727 20:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:34.727 20:13:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:34.727 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 1 00:21:34.727 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:34.727 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:34.727 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:34.727 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:34.727 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:21:34.727 20:13:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.727 20:13:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.727 20:13:27 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.727 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:34.727 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:35.329 00:21:35.329 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:35.329 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:35.329 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.589 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.590 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.590 20:13:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.590 20:13:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.590 20:13:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.590 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:35.590 { 00:21:35.590 "cntlid": 43, 00:21:35.590 "qid": 0, 00:21:35.590 "state": "enabled", 00:21:35.590 "listen_address": { 00:21:35.590 "trtype": "TCP", 00:21:35.590 "adrfam": "IPv4", 00:21:35.590 "traddr": "10.0.0.2", 00:21:35.590 "trsvcid": "4420" 00:21:35.590 }, 00:21:35.590 "peer_address": { 00:21:35.590 "trtype": "TCP", 00:21:35.590 "adrfam": "IPv4", 00:21:35.590 "traddr": "10.0.0.1", 00:21:35.590 "trsvcid": "33614" 00:21:35.590 }, 00:21:35.590 "auth": { 00:21:35.590 "state": "completed", 00:21:35.590 "digest": "sha256", 00:21:35.590 "dhgroup": "ffdhe8192" 00:21:35.590 } 00:21:35.590 } 00:21:35.590 ]' 00:21:35.590 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:35.590 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:35.590 20:13:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:35.590 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:35.590 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:35.590 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.590 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.590 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.850 20:13:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret 
DHHC-1:01:ZmZhN2JlZmYzODA3M2Q0YTNjYWZlZGY5ZWZlZDY3YzfQuTxi: 00:21:36.793 20:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.793 20:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:36.793 20:13:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.793 20:13:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.793 20:13:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.793 20:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:36.793 20:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:36.793 20:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:36.793 20:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 2 00:21:36.793 20:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:36.793 20:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:36.793 20:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:36.793 20:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:36.793 20:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 00:21:36.793 20:13:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.793 20:13:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.793 20:13:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.793 20:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:36.793 20:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:37.735 00:21:37.735 20:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:37.735 20:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:37.735 20:13:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.735 20:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.735 20:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.735 20:13:30 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.735 20:13:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.735 20:13:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.735 20:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:37.735 { 00:21:37.735 "cntlid": 45, 00:21:37.735 "qid": 0, 00:21:37.735 "state": "enabled", 00:21:37.735 "listen_address": { 00:21:37.735 "trtype": "TCP", 00:21:37.735 "adrfam": "IPv4", 00:21:37.735 "traddr": "10.0.0.2", 00:21:37.735 "trsvcid": "4420" 00:21:37.735 }, 00:21:37.735 "peer_address": { 00:21:37.735 "trtype": "TCP", 00:21:37.735 "adrfam": "IPv4", 00:21:37.735 "traddr": "10.0.0.1", 00:21:37.735 "trsvcid": "39956" 00:21:37.735 }, 00:21:37.735 "auth": { 00:21:37.735 "state": "completed", 00:21:37.735 "digest": "sha256", 00:21:37.735 "dhgroup": "ffdhe8192" 00:21:37.735 } 00:21:37.735 } 00:21:37.735 ]' 00:21:37.735 20:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:37.735 20:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:37.735 20:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:37.735 20:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:37.735 20:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:37.996 20:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.996 20:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.996 20:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.996 20:13:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:Mjg0OTMwNjU4MzVkZjlmN2QwZmU4ZjI0MjJhMmY2ZTc4OWM4MjkwMmQwNjQ1NzAxxkicmQ==: 00:21:38.939 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.939 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:38.939 20:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.939 20:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.939 20:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.939 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:38.939 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:38.939 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:38.939 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 3 00:21:38.939 
20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:38.939 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:38.939 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:38.939 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:38.939 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:38.939 20:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.939 20:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.201 20:13:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.201 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:39.201 20:13:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:39.772 00:21:39.772 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:39.772 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.772 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:39.772 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.772 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.772 20:13:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.772 20:13:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.033 20:13:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.033 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:40.033 { 00:21:40.033 "cntlid": 47, 00:21:40.033 "qid": 0, 00:21:40.033 "state": "enabled", 00:21:40.033 "listen_address": { 00:21:40.033 "trtype": "TCP", 00:21:40.033 "adrfam": "IPv4", 00:21:40.033 "traddr": "10.0.0.2", 00:21:40.033 "trsvcid": "4420" 00:21:40.033 }, 00:21:40.033 "peer_address": { 00:21:40.033 "trtype": "TCP", 00:21:40.033 "adrfam": "IPv4", 00:21:40.033 "traddr": "10.0.0.1", 00:21:40.033 "trsvcid": "39980" 00:21:40.033 }, 00:21:40.033 "auth": { 00:21:40.033 "state": "completed", 00:21:40.033 "digest": "sha256", 00:21:40.033 "dhgroup": "ffdhe8192" 00:21:40.033 } 00:21:40.033 } 00:21:40.033 ]' 00:21:40.033 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:40.033 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:40.033 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:40.033 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:40.033 
20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:40.033 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.033 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.033 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.293 20:13:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:Yzk4ZWZkMzVhODk0MmIxOTJkNjVlYjg0ZTJjMWI1YjQ4YzM0MjljYWMwMDE1NDk5MTdjYmVhNzVkODk4NDE4ZSJBSbo=: 00:21:41.233 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.233 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:41.233 20:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.233 20:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.233 20:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.233 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:21:41.233 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:41.233 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:41.233 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:41.233 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:41.233 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 0 00:21:41.233 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:41.233 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:41.233 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:41.233 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:41.233 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:21:41.233 20:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.233 20:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.233 20:13:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.233 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:41.233 20:13:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:41.493 00:21:41.493 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:41.493 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.493 20:13:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:41.754 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.754 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.754 20:13:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.754 20:13:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.754 20:13:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.754 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:41.754 { 00:21:41.754 "cntlid": 49, 00:21:41.754 "qid": 0, 00:21:41.754 "state": "enabled", 00:21:41.754 "listen_address": { 00:21:41.754 "trtype": "TCP", 00:21:41.754 "adrfam": "IPv4", 00:21:41.754 "traddr": "10.0.0.2", 00:21:41.754 "trsvcid": "4420" 00:21:41.754 }, 00:21:41.754 "peer_address": { 00:21:41.754 "trtype": "TCP", 00:21:41.754 "adrfam": "IPv4", 00:21:41.754 "traddr": "10.0.0.1", 00:21:41.754 "trsvcid": "40012" 00:21:41.754 }, 00:21:41.754 "auth": { 00:21:41.754 "state": "completed", 00:21:41.754 "digest": "sha384", 00:21:41.754 "dhgroup": "null" 00:21:41.754 } 00:21:41.754 } 00:21:41.754 ]' 00:21:41.754 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:41.754 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:41.754 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:41.754 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:21:41.754 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:41.754 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.754 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.754 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.014 20:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZGU1YThmZWVmOGM0MzMxNmE0NzAzZDMxZmI3OTE5MzUwMThlMGNjYjFhYzA2OTc5gt6wRQ==: 00:21:42.957 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.957 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:42.957 20:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.957 20:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.957 20:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.957 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:42.957 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:42.957 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:42.957 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 1 00:21:42.957 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:42.957 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:42.957 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:42.957 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:42.957 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:21:42.957 20:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.957 20:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.957 20:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.957 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:42.957 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:43.218 00:21:43.218 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:43.218 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.218 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:43.478 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.478 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.478 20:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.478 20:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.478 20:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.478 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:43.479 { 00:21:43.479 "cntlid": 51, 00:21:43.479 "qid": 
0, 00:21:43.479 "state": "enabled", 00:21:43.479 "listen_address": { 00:21:43.479 "trtype": "TCP", 00:21:43.479 "adrfam": "IPv4", 00:21:43.479 "traddr": "10.0.0.2", 00:21:43.479 "trsvcid": "4420" 00:21:43.479 }, 00:21:43.479 "peer_address": { 00:21:43.479 "trtype": "TCP", 00:21:43.479 "adrfam": "IPv4", 00:21:43.479 "traddr": "10.0.0.1", 00:21:43.479 "trsvcid": "40026" 00:21:43.479 }, 00:21:43.479 "auth": { 00:21:43.479 "state": "completed", 00:21:43.479 "digest": "sha384", 00:21:43.479 "dhgroup": "null" 00:21:43.479 } 00:21:43.479 } 00:21:43.479 ]' 00:21:43.479 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:43.479 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:43.479 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:43.479 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:21:43.479 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:43.739 20:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.739 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.739 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.739 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZmZhN2JlZmYzODA3M2Q0YTNjYWZlZGY5ZWZlZDY3YzfQuTxi: 00:21:44.679 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.679 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:44.679 20:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.679 20:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.679 20:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.679 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:44.679 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:44.679 20:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:44.679 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 2 00:21:44.679 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:44.679 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:44.679 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:44.679 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:44.679 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 00:21:44.679 20:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.679 20:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.679 20:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.679 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:44.679 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:44.939 00:21:45.199 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:45.199 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:45.199 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.199 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.199 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.199 20:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.199 20:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.199 20:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.199 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:45.199 { 00:21:45.199 "cntlid": 53, 00:21:45.199 "qid": 0, 00:21:45.199 "state": "enabled", 00:21:45.199 "listen_address": { 00:21:45.199 "trtype": "TCP", 00:21:45.199 "adrfam": "IPv4", 00:21:45.199 "traddr": "10.0.0.2", 00:21:45.199 "trsvcid": "4420" 00:21:45.199 }, 00:21:45.199 "peer_address": { 00:21:45.199 "trtype": "TCP", 00:21:45.199 "adrfam": "IPv4", 00:21:45.199 "traddr": "10.0.0.1", 00:21:45.199 "trsvcid": "40058" 00:21:45.199 }, 00:21:45.199 "auth": { 00:21:45.199 "state": "completed", 00:21:45.199 "digest": "sha384", 00:21:45.199 "dhgroup": "null" 00:21:45.200 } 00:21:45.200 } 00:21:45.200 ]' 00:21:45.200 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:45.460 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:45.460 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:45.460 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:21:45.460 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:45.460 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.460 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.460 20:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.721 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:Mjg0OTMwNjU4MzVkZjlmN2QwZmU4ZjI0MjJhMmY2ZTc4OWM4MjkwMmQwNjQ1NzAxxkicmQ==: 00:21:46.291 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.291 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:46.291 20:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.291 20:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.291 20:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.291 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:46.291 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:46.291 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:46.550 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 3 00:21:46.550 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:46.550 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:46.550 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:46.550 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:46.550 20:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:46.550 20:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.550 20:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.550 20:13:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.550 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:46.550 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:46.810 00:21:46.810 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:46.810 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:46.810 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.070 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.070 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.070 20:13:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.070 20:13:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.070 20:13:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.070 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:47.070 { 00:21:47.070 "cntlid": 55, 00:21:47.070 "qid": 0, 00:21:47.070 "state": "enabled", 00:21:47.070 "listen_address": { 00:21:47.070 "trtype": "TCP", 00:21:47.070 "adrfam": "IPv4", 00:21:47.070 "traddr": "10.0.0.2", 00:21:47.070 "trsvcid": "4420" 00:21:47.070 }, 00:21:47.070 "peer_address": { 00:21:47.070 "trtype": "TCP", 00:21:47.071 "adrfam": "IPv4", 00:21:47.071 "traddr": "10.0.0.1", 00:21:47.071 "trsvcid": "40082" 00:21:47.071 }, 00:21:47.071 "auth": { 00:21:47.071 "state": "completed", 00:21:47.071 "digest": "sha384", 00:21:47.071 "dhgroup": "null" 00:21:47.071 } 00:21:47.071 } 00:21:47.071 ]' 00:21:47.071 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:47.071 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:47.071 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:47.331 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:21:47.331 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:47.331 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.331 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.331 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.592 20:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:Yzk4ZWZkMzVhODk0MmIxOTJkNjVlYjg0ZTJjMWI1YjQ4YzM0MjljYWMwMDE1NDk5MTdjYmVhNzVkODk4NDE4ZSJBSbo=: 00:21:48.162 20:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.162 20:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:48.162 20:13:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.162 20:13:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.162 20:13:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.162 20:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:48.162 20:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:48.162 20:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:48.162 20:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:48.422 20:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 0 00:21:48.422 20:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:48.422 20:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:48.422 20:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:48.422 20:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:48.422 20:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:21:48.422 20:13:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.422 20:13:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.422 20:13:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.422 20:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:48.422 20:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:48.682 00:21:48.682 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:48.682 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:48.682 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.942 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.942 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.942 20:13:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.942 20:13:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.942 20:13:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.942 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:48.942 { 00:21:48.942 "cntlid": 57, 00:21:48.942 "qid": 0, 00:21:48.942 "state": "enabled", 00:21:48.942 "listen_address": { 00:21:48.942 "trtype": "TCP", 00:21:48.942 "adrfam": "IPv4", 00:21:48.942 "traddr": "10.0.0.2", 00:21:48.942 "trsvcid": "4420" 00:21:48.942 }, 00:21:48.942 "peer_address": { 00:21:48.942 "trtype": "TCP", 00:21:48.942 "adrfam": "IPv4", 00:21:48.942 "traddr": "10.0.0.1", 00:21:48.942 "trsvcid": "49004" 00:21:48.942 }, 00:21:48.942 "auth": { 00:21:48.942 "state": "completed", 00:21:48.942 "digest": "sha384", 00:21:48.942 "dhgroup": "ffdhe2048" 00:21:48.942 } 00:21:48.942 } 
00:21:48.942 ]' 00:21:48.942 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:48.942 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:48.942 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:48.942 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:48.942 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:49.203 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.203 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.203 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.203 20:13:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZGU1YThmZWVmOGM0MzMxNmE0NzAzZDMxZmI3OTE5MzUwMThlMGNjYjFhYzA2OTc5gt6wRQ==: 00:21:50.147 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.147 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.147 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:50.147 20:13:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.147 20:13:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.147 20:13:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.147 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:50.147 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:50.147 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:50.147 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 1 00:21:50.147 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:50.147 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:50.147 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:50.147 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:50.147 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:21:50.147 20:13:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.147 20:13:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.147 20:13:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.408 20:13:42 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:50.408 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:50.408 00:21:50.668 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:50.668 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.668 20:13:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:50.668 20:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.668 20:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.668 20:13:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.668 20:13:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.668 20:13:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.668 20:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:50.668 { 00:21:50.668 "cntlid": 59, 00:21:50.668 "qid": 0, 00:21:50.668 "state": "enabled", 00:21:50.668 "listen_address": { 00:21:50.668 "trtype": "TCP", 00:21:50.668 "adrfam": "IPv4", 00:21:50.668 "traddr": "10.0.0.2", 00:21:50.668 "trsvcid": "4420" 00:21:50.668 }, 00:21:50.668 "peer_address": { 00:21:50.668 "trtype": "TCP", 00:21:50.668 "adrfam": "IPv4", 00:21:50.668 "traddr": "10.0.0.1", 00:21:50.668 "trsvcid": "49026" 00:21:50.668 }, 00:21:50.668 "auth": { 00:21:50.668 "state": "completed", 00:21:50.668 "digest": "sha384", 00:21:50.668 "dhgroup": "ffdhe2048" 00:21:50.668 } 00:21:50.668 } 00:21:50.668 ]' 00:21:50.668 20:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:50.928 20:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:50.928 20:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:50.928 20:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:50.928 20:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:50.928 20:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.928 20:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.928 20:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.189 20:13:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZmZhN2JlZmYzODA3M2Q0YTNjYWZlZGY5ZWZlZDY3YzfQuTxi: 00:21:51.760 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.022 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:52.022 20:13:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.022 20:13:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.022 20:13:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.022 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:52.022 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:52.022 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:52.022 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 2 00:21:52.022 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:52.022 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:52.022 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:52.022 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:52.022 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 00:21:52.022 20:13:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.022 20:13:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.022 20:13:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.022 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:52.022 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:52.282 00:21:52.282 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:52.282 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:52.282 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.542 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.542 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.542 20:13:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.542 20:13:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:52.542 20:13:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.542 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:52.542 { 00:21:52.542 "cntlid": 61, 00:21:52.542 "qid": 0, 00:21:52.542 "state": "enabled", 00:21:52.542 "listen_address": { 00:21:52.542 "trtype": "TCP", 00:21:52.542 "adrfam": "IPv4", 00:21:52.542 "traddr": "10.0.0.2", 00:21:52.542 "trsvcid": "4420" 00:21:52.542 }, 00:21:52.542 "peer_address": { 00:21:52.542 "trtype": "TCP", 00:21:52.542 "adrfam": "IPv4", 00:21:52.542 "traddr": "10.0.0.1", 00:21:52.542 "trsvcid": "49062" 00:21:52.542 }, 00:21:52.542 "auth": { 00:21:52.542 "state": "completed", 00:21:52.542 "digest": "sha384", 00:21:52.542 "dhgroup": "ffdhe2048" 00:21:52.542 } 00:21:52.542 } 00:21:52.542 ]' 00:21:52.542 20:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:52.542 20:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:52.542 20:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:52.802 20:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:52.802 20:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:52.802 20:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.803 20:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.803 20:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.062 20:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:Mjg0OTMwNjU4MzVkZjlmN2QwZmU4ZjI0MjJhMmY2ZTc4OWM4MjkwMmQwNjQ1NzAxxkicmQ==: 00:21:53.633 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.633 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:53.633 20:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.633 20:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.633 20:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.633 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:53.633 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:53.633 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:53.893 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 3 00:21:53.893 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:53.893 20:13:46 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:21:53.894 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:53.894 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:53.894 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:53.894 20:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.894 20:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.894 20:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.894 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:53.894 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:54.154 00:21:54.154 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:54.154 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:54.154 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.415 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.415 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.415 20:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.415 20:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.415 20:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.415 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:54.415 { 00:21:54.415 "cntlid": 63, 00:21:54.415 "qid": 0, 00:21:54.415 "state": "enabled", 00:21:54.415 "listen_address": { 00:21:54.415 "trtype": "TCP", 00:21:54.415 "adrfam": "IPv4", 00:21:54.415 "traddr": "10.0.0.2", 00:21:54.415 "trsvcid": "4420" 00:21:54.415 }, 00:21:54.415 "peer_address": { 00:21:54.415 "trtype": "TCP", 00:21:54.415 "adrfam": "IPv4", 00:21:54.415 "traddr": "10.0.0.1", 00:21:54.415 "trsvcid": "49076" 00:21:54.415 }, 00:21:54.415 "auth": { 00:21:54.415 "state": "completed", 00:21:54.415 "digest": "sha384", 00:21:54.415 "dhgroup": "ffdhe2048" 00:21:54.415 } 00:21:54.415 } 00:21:54.415 ]' 00:21:54.415 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:54.415 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:54.415 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:54.415 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:54.415 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:54.676 20:13:46 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.676 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.676 20:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.676 20:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:Yzk4ZWZkMzVhODk0MmIxOTJkNjVlYjg0ZTJjMWI1YjQ4YzM0MjljYWMwMDE1NDk5MTdjYmVhNzVkODk4NDE4ZSJBSbo=: 00:21:55.616 20:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.616 20:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:55.616 20:13:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.616 20:13:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.616 20:13:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.616 20:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:55.616 20:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:55.616 20:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:55.616 20:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:55.877 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 0 00:21:55.877 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:55.877 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:55.877 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:55.877 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:55.877 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:21:55.877 20:13:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.877 20:13:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.877 20:13:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.877 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:55.877 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:56.138 00:21:56.138 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:56.138 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:56.138 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.398 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.398 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.398 20:13:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.398 20:13:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.398 20:13:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.398 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:56.398 { 00:21:56.398 "cntlid": 65, 00:21:56.398 "qid": 0, 00:21:56.398 "state": "enabled", 00:21:56.398 "listen_address": { 00:21:56.398 "trtype": "TCP", 00:21:56.398 "adrfam": "IPv4", 00:21:56.398 "traddr": "10.0.0.2", 00:21:56.398 "trsvcid": "4420" 00:21:56.398 }, 00:21:56.398 "peer_address": { 00:21:56.398 "trtype": "TCP", 00:21:56.398 "adrfam": "IPv4", 00:21:56.398 "traddr": "10.0.0.1", 00:21:56.398 "trsvcid": "49104" 00:21:56.398 }, 00:21:56.399 "auth": { 00:21:56.399 "state": "completed", 00:21:56.399 "digest": "sha384", 00:21:56.399 "dhgroup": "ffdhe3072" 00:21:56.399 } 00:21:56.399 } 00:21:56.399 ]' 00:21:56.399 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:56.399 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:56.399 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:56.399 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:56.399 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:56.399 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.399 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.399 20:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.658 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZGU1YThmZWVmOGM0MzMxNmE0NzAzZDMxZmI3OTE5MzUwMThlMGNjYjFhYzA2OTc5gt6wRQ==: 00:21:57.600 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.600 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:57.600 20:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.600 
20:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.600 20:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.600 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:57.600 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:57.600 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:57.600 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 1 00:21:57.600 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:57.600 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:57.600 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:57.600 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:57.600 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:21:57.600 20:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.600 20:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.600 20:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.600 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:57.600 20:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:57.861 00:21:57.861 20:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:57.861 20:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:57.861 20:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.121 20:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.121 20:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.121 20:13:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.121 20:13:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.121 20:13:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.121 20:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:58.121 { 00:21:58.121 "cntlid": 67, 00:21:58.121 "qid": 0, 00:21:58.121 "state": "enabled", 00:21:58.121 "listen_address": { 00:21:58.121 "trtype": "TCP", 00:21:58.121 "adrfam": "IPv4", 00:21:58.121 "traddr": "10.0.0.2", 00:21:58.121 "trsvcid": 
"4420" 00:21:58.121 }, 00:21:58.121 "peer_address": { 00:21:58.121 "trtype": "TCP", 00:21:58.121 "adrfam": "IPv4", 00:21:58.121 "traddr": "10.0.0.1", 00:21:58.121 "trsvcid": "48724" 00:21:58.121 }, 00:21:58.121 "auth": { 00:21:58.121 "state": "completed", 00:21:58.121 "digest": "sha384", 00:21:58.121 "dhgroup": "ffdhe3072" 00:21:58.121 } 00:21:58.121 } 00:21:58.121 ]' 00:21:58.121 20:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:58.121 20:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:58.121 20:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:58.121 20:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:58.121 20:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:58.381 20:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.382 20:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.382 20:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.382 20:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZmZhN2JlZmYzODA3M2Q0YTNjYWZlZGY5ZWZlZDY3YzfQuTxi: 00:21:59.322 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.322 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:59.322 20:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.322 20:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.322 20:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.322 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:59.322 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:59.322 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:59.322 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 2 00:21:59.322 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:59.322 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:59.322 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:59.322 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:59.322 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 00:21:59.323 20:13:51 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.323 20:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.323 20:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.323 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:59.323 20:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:59.582 00:21:59.843 20:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:59.843 20:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:59.843 20:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.843 20:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.843 20:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.843 20:13:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.843 20:13:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.843 20:13:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.843 20:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:59.843 { 00:21:59.843 "cntlid": 69, 00:21:59.843 "qid": 0, 00:21:59.843 "state": "enabled", 00:21:59.843 "listen_address": { 00:21:59.843 "trtype": "TCP", 00:21:59.843 "adrfam": "IPv4", 00:21:59.843 "traddr": "10.0.0.2", 00:21:59.843 "trsvcid": "4420" 00:21:59.843 }, 00:21:59.843 "peer_address": { 00:21:59.843 "trtype": "TCP", 00:21:59.843 "adrfam": "IPv4", 00:21:59.843 "traddr": "10.0.0.1", 00:21:59.843 "trsvcid": "48756" 00:21:59.843 }, 00:21:59.843 "auth": { 00:21:59.843 "state": "completed", 00:21:59.843 "digest": "sha384", 00:21:59.843 "dhgroup": "ffdhe3072" 00:21:59.843 } 00:21:59.843 } 00:21:59.843 ]' 00:21:59.843 20:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:00.105 20:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:00.105 20:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:00.105 20:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:00.105 20:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:00.105 20:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.105 20:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.105 20:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.366 20:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:Mjg0OTMwNjU4MzVkZjlmN2QwZmU4ZjI0MjJhMmY2ZTc4OWM4MjkwMmQwNjQ1NzAxxkicmQ==: 00:22:00.939 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.939 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:00.939 20:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.939 20:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.939 20:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.939 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:00.939 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:00.939 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:01.199 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 3 00:22:01.199 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:01.199 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:01.199 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:01.199 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:01.199 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:01.199 20:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.199 20:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.199 20:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.199 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:01.199 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:01.459 00:22:01.459 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:01.459 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.459 20:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:01.719 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:22:01.719 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.719 20:13:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.719 20:13:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.719 20:13:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.719 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:01.719 { 00:22:01.719 "cntlid": 71, 00:22:01.719 "qid": 0, 00:22:01.719 "state": "enabled", 00:22:01.719 "listen_address": { 00:22:01.719 "trtype": "TCP", 00:22:01.719 "adrfam": "IPv4", 00:22:01.719 "traddr": "10.0.0.2", 00:22:01.719 "trsvcid": "4420" 00:22:01.719 }, 00:22:01.719 "peer_address": { 00:22:01.719 "trtype": "TCP", 00:22:01.719 "adrfam": "IPv4", 00:22:01.719 "traddr": "10.0.0.1", 00:22:01.719 "trsvcid": "48776" 00:22:01.719 }, 00:22:01.719 "auth": { 00:22:01.719 "state": "completed", 00:22:01.719 "digest": "sha384", 00:22:01.719 "dhgroup": "ffdhe3072" 00:22:01.719 } 00:22:01.719 } 00:22:01.719 ]' 00:22:01.719 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:01.719 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:01.719 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:01.979 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:01.979 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:01.979 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.979 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.979 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.238 20:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:Yzk4ZWZkMzVhODk0MmIxOTJkNjVlYjg0ZTJjMWI1YjQ4YzM0MjljYWMwMDE1NDk5MTdjYmVhNzVkODk4NDE4ZSJBSbo=: 00:22:02.808 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.808 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:02.808 20:13:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.808 20:13:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.808 20:13:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.808 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:22:02.808 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:02.808 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:02.808 20:13:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:03.069 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 0 00:22:03.069 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:03.069 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:03.069 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:03.069 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:03.069 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:22:03.069 20:13:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.069 20:13:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.069 20:13:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.069 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:03.069 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:03.330 00:22:03.331 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:03.331 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:03.331 20:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.591 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.591 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.591 20:13:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.591 20:13:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.591 20:13:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.591 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:03.591 { 00:22:03.591 "cntlid": 73, 00:22:03.591 "qid": 0, 00:22:03.591 "state": "enabled", 00:22:03.591 "listen_address": { 00:22:03.591 "trtype": "TCP", 00:22:03.591 "adrfam": "IPv4", 00:22:03.591 "traddr": "10.0.0.2", 00:22:03.591 "trsvcid": "4420" 00:22:03.591 }, 00:22:03.591 "peer_address": { 00:22:03.591 "trtype": "TCP", 00:22:03.591 "adrfam": "IPv4", 00:22:03.591 "traddr": "10.0.0.1", 00:22:03.591 "trsvcid": "48802" 00:22:03.591 }, 00:22:03.591 "auth": { 00:22:03.591 "state": "completed", 00:22:03.591 "digest": "sha384", 00:22:03.592 "dhgroup": "ffdhe4096" 00:22:03.592 } 00:22:03.592 } 00:22:03.592 ]' 00:22:03.592 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r 
'.[0].auth.digest' 00:22:03.592 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:03.592 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:03.852 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:03.852 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:03.852 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.852 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.852 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.113 20:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZGU1YThmZWVmOGM0MzMxNmE0NzAzZDMxZmI3OTE5MzUwMThlMGNjYjFhYzA2OTc5gt6wRQ==: 00:22:04.767 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.767 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:04.767 20:13:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.767 20:13:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.767 20:13:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.767 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:04.767 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:04.767 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:05.029 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 1 00:22:05.029 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:05.029 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:05.029 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:05.029 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:05.029 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:22:05.029 20:13:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.029 20:13:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.029 20:13:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.029 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:05.029 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:05.290 00:22:05.290 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:05.290 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:05.290 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.551 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.551 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.551 20:13:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.551 20:13:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.551 20:13:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.551 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:05.551 { 00:22:05.551 "cntlid": 75, 00:22:05.551 "qid": 0, 00:22:05.551 "state": "enabled", 00:22:05.551 "listen_address": { 00:22:05.551 "trtype": "TCP", 00:22:05.551 "adrfam": "IPv4", 00:22:05.551 "traddr": "10.0.0.2", 00:22:05.551 "trsvcid": "4420" 00:22:05.551 }, 00:22:05.551 "peer_address": { 00:22:05.551 "trtype": "TCP", 00:22:05.551 "adrfam": "IPv4", 00:22:05.551 "traddr": "10.0.0.1", 00:22:05.551 "trsvcid": "48828" 00:22:05.551 }, 00:22:05.551 "auth": { 00:22:05.551 "state": "completed", 00:22:05.551 "digest": "sha384", 00:22:05.551 "dhgroup": "ffdhe4096" 00:22:05.551 } 00:22:05.551 } 00:22:05.551 ]' 00:22:05.551 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:05.551 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:05.551 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:05.551 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:05.551 20:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:05.551 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.551 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.551 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.811 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZmZhN2JlZmYzODA3M2Q0YTNjYWZlZGY5ZWZlZDY3YzfQuTxi: 00:22:06.753 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:22:06.753 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:06.753 20:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.753 20:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.753 20:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.753 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:06.753 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:06.753 20:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:06.753 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 2 00:22:06.753 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:06.753 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:06.753 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:06.753 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:06.753 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 00:22:06.753 20:13:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.753 20:13:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.753 20:13:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.753 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:06.753 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:07.325 00:22:07.325 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:07.325 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:07.325 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.325 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.325 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.325 20:13:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.325 20:13:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.325 20:13:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
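For readers following the trace: each pass of the loop above exercises one DH-HMAC-CHAP digest/dhgroup/key combination end to end. Below is a condensed sketch of that cycle, reconstructed only from the target/auth.sh commands recorded in this log; the rpc.py path, addresses, NQNs and key names are the ones printed in the trace, while the shell variables and the '...' secret placeholder are added here purely for brevity, and the target-side calls are assumed to go to the target application's default RPC socket (as the script's rpc_cmd helper does), since that socket is not shown in the trace.

#!/usr/bin/env bash
# Sketch of one authentication cycle from target/auth.sh (not the verbatim script).
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
subnqn=nqn.2024-03.io.spdk:cnode0

# Host-side SPDK initiator (RPC socket /var/tmp/host.sock): pin one digest/dhgroup pair.
"$rpc_py" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096

# Target side: allow the host NQN with the DH-HMAC-CHAP key under test (key2 in this pass).
"$rpc_py" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2

# Attach from the SPDK host, then confirm the qpair completed authentication.
"$rpc_py" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key2
"$rpc_py" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect "completed"
"$rpc_py" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# Repeat the handshake through the kernel initiator via nvme-cli; the secret is the
# literal DHHC-1:02:... string for key2 printed in the trace above.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid "$hostid" --dhchap-secret 'DHHC-1:02:...'
nvme disconnect -n "$subnqn"

# Tear down before the next key/dhgroup combination.
"$rpc_py" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"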
00:22:07.325 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:07.325 { 00:22:07.325 "cntlid": 77, 00:22:07.325 "qid": 0, 00:22:07.325 "state": "enabled", 00:22:07.325 "listen_address": { 00:22:07.325 "trtype": "TCP", 00:22:07.325 "adrfam": "IPv4", 00:22:07.326 "traddr": "10.0.0.2", 00:22:07.326 "trsvcid": "4420" 00:22:07.326 }, 00:22:07.326 "peer_address": { 00:22:07.326 "trtype": "TCP", 00:22:07.326 "adrfam": "IPv4", 00:22:07.326 "traddr": "10.0.0.1", 00:22:07.326 "trsvcid": "48860" 00:22:07.326 }, 00:22:07.326 "auth": { 00:22:07.326 "state": "completed", 00:22:07.326 "digest": "sha384", 00:22:07.326 "dhgroup": "ffdhe4096" 00:22:07.326 } 00:22:07.326 } 00:22:07.326 ]' 00:22:07.326 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:07.326 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:07.326 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:07.586 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:07.587 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:07.587 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.587 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.587 20:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.847 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:Mjg0OTMwNjU4MzVkZjlmN2QwZmU4ZjI0MjJhMmY2ZTc4OWM4MjkwMmQwNjQ1NzAxxkicmQ==: 00:22:08.418 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.418 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:08.418 20:14:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.418 20:14:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.418 20:14:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.418 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:08.418 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:08.418 20:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:08.679 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 3 00:22:08.679 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:08.679 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:08.679 20:14:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:08.679 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:08.679 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:08.679 20:14:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.679 20:14:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.679 20:14:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.679 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:08.679 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:08.940 00:22:08.940 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:08.940 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:08.940 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.201 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.201 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.201 20:14:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.201 20:14:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.201 20:14:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.201 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:09.201 { 00:22:09.201 "cntlid": 79, 00:22:09.201 "qid": 0, 00:22:09.201 "state": "enabled", 00:22:09.201 "listen_address": { 00:22:09.201 "trtype": "TCP", 00:22:09.201 "adrfam": "IPv4", 00:22:09.201 "traddr": "10.0.0.2", 00:22:09.201 "trsvcid": "4420" 00:22:09.201 }, 00:22:09.201 "peer_address": { 00:22:09.201 "trtype": "TCP", 00:22:09.201 "adrfam": "IPv4", 00:22:09.201 "traddr": "10.0.0.1", 00:22:09.201 "trsvcid": "39742" 00:22:09.201 }, 00:22:09.201 "auth": { 00:22:09.201 "state": "completed", 00:22:09.201 "digest": "sha384", 00:22:09.201 "dhgroup": "ffdhe4096" 00:22:09.201 } 00:22:09.201 } 00:22:09.201 ]' 00:22:09.201 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:09.201 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:09.201 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:09.462 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:09.462 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:09.462 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.462 20:14:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.462 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.462 20:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:Yzk4ZWZkMzVhODk0MmIxOTJkNjVlYjg0ZTJjMWI1YjQ4YzM0MjljYWMwMDE1NDk5MTdjYmVhNzVkODk4NDE4ZSJBSbo=: 00:22:10.404 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.404 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:10.404 20:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.404 20:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.404 20:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.404 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:22:10.404 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:10.404 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:10.404 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:10.665 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 0 00:22:10.665 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:10.665 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:10.665 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:10.665 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:10.665 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:22:10.665 20:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.665 20:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.665 20:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.665 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:10.665 20:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:10.925 00:22:10.925 20:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:10.925 20:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.925 20:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:11.185 20:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.185 20:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.185 20:14:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.185 20:14:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.185 20:14:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.185 20:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:11.185 { 00:22:11.185 "cntlid": 81, 00:22:11.185 "qid": 0, 00:22:11.185 "state": "enabled", 00:22:11.185 "listen_address": { 00:22:11.185 "trtype": "TCP", 00:22:11.185 "adrfam": "IPv4", 00:22:11.185 "traddr": "10.0.0.2", 00:22:11.185 "trsvcid": "4420" 00:22:11.185 }, 00:22:11.185 "peer_address": { 00:22:11.185 "trtype": "TCP", 00:22:11.185 "adrfam": "IPv4", 00:22:11.185 "traddr": "10.0.0.1", 00:22:11.185 "trsvcid": "39760" 00:22:11.185 }, 00:22:11.185 "auth": { 00:22:11.185 "state": "completed", 00:22:11.185 "digest": "sha384", 00:22:11.185 "dhgroup": "ffdhe6144" 00:22:11.185 } 00:22:11.185 } 00:22:11.185 ]' 00:22:11.185 20:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:11.185 20:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:11.185 20:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:11.185 20:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:11.185 20:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:11.445 20:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.446 20:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.446 20:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.446 20:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZGU1YThmZWVmOGM0MzMxNmE0NzAzZDMxZmI3OTE5MzUwMThlMGNjYjFhYzA2OTc5gt6wRQ==: 00:22:12.390 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.390 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:12.390 20:14:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.390 20:14:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:22:12.390 20:14:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.390 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:12.390 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:12.390 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:12.390 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 1 00:22:12.390 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:12.390 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:12.390 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:12.390 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:12.390 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:22:12.390 20:14:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.390 20:14:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.390 20:14:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.390 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:12.390 20:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:12.961 00:22:12.961 20:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:12.961 20:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:12.961 20:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.223 20:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.223 20:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.223 20:14:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.223 20:14:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.223 20:14:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.223 20:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:13.223 { 00:22:13.223 "cntlid": 83, 00:22:13.223 "qid": 0, 00:22:13.223 "state": "enabled", 00:22:13.223 "listen_address": { 00:22:13.223 "trtype": "TCP", 00:22:13.223 "adrfam": "IPv4", 00:22:13.223 "traddr": "10.0.0.2", 00:22:13.223 "trsvcid": "4420" 00:22:13.223 }, 00:22:13.223 "peer_address": { 00:22:13.223 
"trtype": "TCP", 00:22:13.223 "adrfam": "IPv4", 00:22:13.223 "traddr": "10.0.0.1", 00:22:13.223 "trsvcid": "39784" 00:22:13.223 }, 00:22:13.223 "auth": { 00:22:13.223 "state": "completed", 00:22:13.223 "digest": "sha384", 00:22:13.223 "dhgroup": "ffdhe6144" 00:22:13.223 } 00:22:13.223 } 00:22:13.223 ]' 00:22:13.223 20:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:13.223 20:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:13.223 20:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:13.223 20:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:13.223 20:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:13.223 20:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.223 20:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.223 20:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.484 20:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZmZhN2JlZmYzODA3M2Q0YTNjYWZlZGY5ZWZlZDY3YzfQuTxi: 00:22:14.055 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.317 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:14.317 20:14:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.317 20:14:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.317 20:14:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.317 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:14.317 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:14.318 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:14.318 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 2 00:22:14.318 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:14.318 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:14.318 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:14.318 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:14.318 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 00:22:14.318 20:14:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:14.318 20:14:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.318 20:14:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.318 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:14.318 20:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:14.891 00:22:14.891 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:14.891 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:14.891 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.153 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.153 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.153 20:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.153 20:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.153 20:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.153 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:15.153 { 00:22:15.153 "cntlid": 85, 00:22:15.153 "qid": 0, 00:22:15.153 "state": "enabled", 00:22:15.153 "listen_address": { 00:22:15.153 "trtype": "TCP", 00:22:15.153 "adrfam": "IPv4", 00:22:15.153 "traddr": "10.0.0.2", 00:22:15.153 "trsvcid": "4420" 00:22:15.153 }, 00:22:15.153 "peer_address": { 00:22:15.153 "trtype": "TCP", 00:22:15.153 "adrfam": "IPv4", 00:22:15.153 "traddr": "10.0.0.1", 00:22:15.153 "trsvcid": "39830" 00:22:15.153 }, 00:22:15.153 "auth": { 00:22:15.153 "state": "completed", 00:22:15.153 "digest": "sha384", 00:22:15.153 "dhgroup": "ffdhe6144" 00:22:15.153 } 00:22:15.153 } 00:22:15.153 ]' 00:22:15.153 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:15.153 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:15.153 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:15.153 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:15.153 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:15.153 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.153 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.153 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.413 20:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:Mjg0OTMwNjU4MzVkZjlmN2QwZmU4ZjI0MjJhMmY2ZTc4OWM4MjkwMmQwNjQ1NzAxxkicmQ==: 00:22:16.357 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.357 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:16.357 20:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.357 20:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.357 20:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.357 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:16.357 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:16.357 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:16.357 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 3 00:22:16.357 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:16.357 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:16.357 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:16.357 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:16.357 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:16.357 20:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.357 20:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.357 20:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.357 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:16.357 20:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:16.929 00:22:16.929 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:16.929 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.929 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:16.929 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.929 20:14:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.929 20:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.929 20:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.929 20:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.929 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:16.929 { 00:22:16.929 "cntlid": 87, 00:22:16.929 "qid": 0, 00:22:16.929 "state": "enabled", 00:22:16.929 "listen_address": { 00:22:16.929 "trtype": "TCP", 00:22:16.929 "adrfam": "IPv4", 00:22:16.929 "traddr": "10.0.0.2", 00:22:16.929 "trsvcid": "4420" 00:22:16.929 }, 00:22:16.929 "peer_address": { 00:22:16.929 "trtype": "TCP", 00:22:16.929 "adrfam": "IPv4", 00:22:16.929 "traddr": "10.0.0.1", 00:22:16.929 "trsvcid": "39868" 00:22:16.929 }, 00:22:16.929 "auth": { 00:22:16.929 "state": "completed", 00:22:16.929 "digest": "sha384", 00:22:16.929 "dhgroup": "ffdhe6144" 00:22:16.929 } 00:22:16.929 } 00:22:16.929 ]' 00:22:16.929 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:17.190 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:17.190 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:17.190 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:17.190 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:17.190 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.190 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.190 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.452 20:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:Yzk4ZWZkMzVhODk0MmIxOTJkNjVlYjg0ZTJjMWI1YjQ4YzM0MjljYWMwMDE1NDk5MTdjYmVhNzVkODk4NDE4ZSJBSbo=: 00:22:18.028 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.028 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:18.028 20:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.028 20:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.028 20:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.028 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:22:18.028 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:18.028 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:18.028 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:18.288 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 0 00:22:18.288 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:18.288 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:18.288 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:18.288 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:18.288 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:22:18.288 20:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.288 20:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.288 20:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.288 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:18.288 20:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:18.859 00:22:19.121 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:19.121 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.121 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:19.121 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.121 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.121 20:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.121 20:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.121 20:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.121 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:19.121 { 00:22:19.121 "cntlid": 89, 00:22:19.121 "qid": 0, 00:22:19.121 "state": "enabled", 00:22:19.121 "listen_address": { 00:22:19.121 "trtype": "TCP", 00:22:19.121 "adrfam": "IPv4", 00:22:19.121 "traddr": "10.0.0.2", 00:22:19.121 "trsvcid": "4420" 00:22:19.121 }, 00:22:19.121 "peer_address": { 00:22:19.121 "trtype": "TCP", 00:22:19.121 "adrfam": "IPv4", 00:22:19.121 "traddr": "10.0.0.1", 00:22:19.121 "trsvcid": "48708" 00:22:19.121 }, 00:22:19.121 "auth": { 00:22:19.121 "state": "completed", 00:22:19.121 "digest": "sha384", 00:22:19.121 "dhgroup": "ffdhe8192" 00:22:19.121 } 00:22:19.121 } 00:22:19.121 ]' 00:22:19.121 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:19.383 20:14:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:19.383 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:19.383 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:19.383 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:19.383 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.383 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.383 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.644 20:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZGU1YThmZWVmOGM0MzMxNmE0NzAzZDMxZmI3OTE5MzUwMThlMGNjYjFhYzA2OTc5gt6wRQ==: 00:22:20.216 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.216 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:20.216 20:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.216 20:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.216 20:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.216 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:20.216 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:20.216 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:20.477 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 1 00:22:20.477 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:20.477 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:20.477 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:20.477 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:20.477 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:22:20.477 20:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.477 20:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.477 20:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.477 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:20.477 20:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:21.049 00:22:21.049 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:21.049 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.049 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:21.310 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.310 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.310 20:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.310 20:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.310 20:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.310 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:21.310 { 00:22:21.310 "cntlid": 91, 00:22:21.310 "qid": 0, 00:22:21.310 "state": "enabled", 00:22:21.310 "listen_address": { 00:22:21.310 "trtype": "TCP", 00:22:21.310 "adrfam": "IPv4", 00:22:21.310 "traddr": "10.0.0.2", 00:22:21.310 "trsvcid": "4420" 00:22:21.310 }, 00:22:21.310 "peer_address": { 00:22:21.310 "trtype": "TCP", 00:22:21.310 "adrfam": "IPv4", 00:22:21.310 "traddr": "10.0.0.1", 00:22:21.310 "trsvcid": "48736" 00:22:21.310 }, 00:22:21.310 "auth": { 00:22:21.310 "state": "completed", 00:22:21.310 "digest": "sha384", 00:22:21.310 "dhgroup": "ffdhe8192" 00:22:21.310 } 00:22:21.310 } 00:22:21.310 ]' 00:22:21.310 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:21.310 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:21.310 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:21.572 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:21.572 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:21.572 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.572 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.572 20:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.833 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZmZhN2JlZmYzODA3M2Q0YTNjYWZlZGY5ZWZlZDY3YzfQuTxi: 00:22:22.404 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:22.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:22:22.404 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:22.404 20:14:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.404 20:14:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.404 20:14:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.404 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:22.404 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:22.404 20:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:22.664 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 2 00:22:22.664 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:22.664 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:22.664 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:22.664 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:22.664 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 00:22:22.664 20:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.664 20:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.664 20:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.664 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:22.664 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:23.236 00:22:23.236 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:23.236 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:23.236 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.497 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.497 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.497 20:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.497 20:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.497 20:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:22:23.497 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:23.497 { 00:22:23.497 "cntlid": 93, 00:22:23.497 "qid": 0, 00:22:23.497 "state": "enabled", 00:22:23.497 "listen_address": { 00:22:23.497 "trtype": "TCP", 00:22:23.497 "adrfam": "IPv4", 00:22:23.497 "traddr": "10.0.0.2", 00:22:23.497 "trsvcid": "4420" 00:22:23.497 }, 00:22:23.497 "peer_address": { 00:22:23.497 "trtype": "TCP", 00:22:23.497 "adrfam": "IPv4", 00:22:23.497 "traddr": "10.0.0.1", 00:22:23.497 "trsvcid": "48758" 00:22:23.497 }, 00:22:23.497 "auth": { 00:22:23.497 "state": "completed", 00:22:23.497 "digest": "sha384", 00:22:23.497 "dhgroup": "ffdhe8192" 00:22:23.497 } 00:22:23.497 } 00:22:23.497 ]' 00:22:23.497 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:23.497 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:23.497 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:23.497 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:23.497 20:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:23.757 20:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.757 20:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.757 20:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.757 20:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:Mjg0OTMwNjU4MzVkZjlmN2QwZmU4ZjI0MjJhMmY2ZTc4OWM4MjkwMmQwNjQ1NzAxxkicmQ==: 00:22:24.696 20:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.696 20:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:24.696 20:14:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.696 20:14:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.696 20:14:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.696 20:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:24.696 20:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:24.696 20:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:24.696 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 3 00:22:24.696 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:24.696 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:24.696 20:14:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:24.696 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:24.696 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:24.696 20:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.696 20:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.696 20:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.696 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:24.696 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:25.638 00:22:25.638 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:25.638 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:25.638 20:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.638 20:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.638 20:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:25.638 20:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.638 20:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.638 20:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.638 20:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:25.638 { 00:22:25.638 "cntlid": 95, 00:22:25.638 "qid": 0, 00:22:25.638 "state": "enabled", 00:22:25.638 "listen_address": { 00:22:25.638 "trtype": "TCP", 00:22:25.638 "adrfam": "IPv4", 00:22:25.638 "traddr": "10.0.0.2", 00:22:25.638 "trsvcid": "4420" 00:22:25.638 }, 00:22:25.638 "peer_address": { 00:22:25.638 "trtype": "TCP", 00:22:25.638 "adrfam": "IPv4", 00:22:25.638 "traddr": "10.0.0.1", 00:22:25.638 "trsvcid": "48786" 00:22:25.638 }, 00:22:25.638 "auth": { 00:22:25.638 "state": "completed", 00:22:25.638 "digest": "sha384", 00:22:25.638 "dhgroup": "ffdhe8192" 00:22:25.638 } 00:22:25.638 } 00:22:25.638 ]' 00:22:25.638 20:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:25.638 20:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:25.638 20:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:25.638 20:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:25.638 20:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:25.898 20:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:25.898 20:14:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:25.898 20:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:25.898 20:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:Yzk4ZWZkMzVhODk0MmIxOTJkNjVlYjg0ZTJjMWI1YjQ4YzM0MjljYWMwMDE1NDk5MTdjYmVhNzVkODk4NDE4ZSJBSbo=: 00:22:26.840 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:26.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:26.840 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:26.840 20:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.840 20:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.840 20:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.840 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:22:26.840 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:22:26.840 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:26.840 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:26.840 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:27.101 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 0 00:22:27.101 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:27.101 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:27.101 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:27.101 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:27.101 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:22:27.101 20:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.101 20:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.101 20:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.101 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:27.101 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:27.362 00:22:27.362 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:27.362 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:27.362 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.362 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.362 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:27.362 20:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.362 20:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.628 20:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.628 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:27.628 { 00:22:27.628 "cntlid": 97, 00:22:27.628 "qid": 0, 00:22:27.628 "state": "enabled", 00:22:27.628 "listen_address": { 00:22:27.628 "trtype": "TCP", 00:22:27.628 "adrfam": "IPv4", 00:22:27.628 "traddr": "10.0.0.2", 00:22:27.628 "trsvcid": "4420" 00:22:27.628 }, 00:22:27.628 "peer_address": { 00:22:27.628 "trtype": "TCP", 00:22:27.628 "adrfam": "IPv4", 00:22:27.628 "traddr": "10.0.0.1", 00:22:27.628 "trsvcid": "51230" 00:22:27.628 }, 00:22:27.628 "auth": { 00:22:27.628 "state": "completed", 00:22:27.628 "digest": "sha512", 00:22:27.628 "dhgroup": "null" 00:22:27.628 } 00:22:27.628 } 00:22:27.628 ]' 00:22:27.628 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:27.628 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:27.628 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:27.628 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:22:27.628 20:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:27.628 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.628 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.628 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:27.889 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZGU1YThmZWVmOGM0MzMxNmE0NzAzZDMxZmI3OTE5MzUwMThlMGNjYjFhYzA2OTc5gt6wRQ==: 00:22:28.459 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.720 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:28.720 20:14:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.720 20:14:20 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.720 20:14:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.720 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:28.720 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:28.720 20:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:28.720 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 1 00:22:28.720 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:28.720 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:28.720 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:28.720 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:28.720 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:22:28.720 20:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.720 20:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.720 20:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.720 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:28.720 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:28.980 00:22:28.980 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:28.980 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.980 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:29.240 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.240 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:29.240 20:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.240 20:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.240 20:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.240 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:29.240 { 00:22:29.240 "cntlid": 99, 00:22:29.240 "qid": 0, 00:22:29.240 "state": "enabled", 00:22:29.240 "listen_address": { 00:22:29.240 "trtype": "TCP", 00:22:29.240 "adrfam": "IPv4", 00:22:29.240 "traddr": "10.0.0.2", 00:22:29.240 "trsvcid": "4420" 00:22:29.240 }, 
00:22:29.240 "peer_address": { 00:22:29.240 "trtype": "TCP", 00:22:29.240 "adrfam": "IPv4", 00:22:29.240 "traddr": "10.0.0.1", 00:22:29.240 "trsvcid": "51274" 00:22:29.240 }, 00:22:29.240 "auth": { 00:22:29.240 "state": "completed", 00:22:29.240 "digest": "sha512", 00:22:29.240 "dhgroup": "null" 00:22:29.240 } 00:22:29.240 } 00:22:29.240 ]' 00:22:29.240 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:29.240 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:29.240 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:29.501 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:22:29.501 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:29.501 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:29.501 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:29.501 20:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:29.762 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZmZhN2JlZmYzODA3M2Q0YTNjYWZlZGY5ZWZlZDY3YzfQuTxi: 00:22:30.333 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:30.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:30.333 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:30.333 20:14:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.333 20:14:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.333 20:14:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.333 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:30.333 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:30.333 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:30.599 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 2 00:22:30.599 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:30.599 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:30.599 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:30.599 20:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:30.599 20:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 00:22:30.599 20:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:30.599 20:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.599 20:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.599 20:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:30.599 20:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:30.858 00:22:30.858 20:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:30.859 20:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.859 20:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:31.119 20:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.119 20:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:31.119 20:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.119 20:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.119 20:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.119 20:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:31.119 { 00:22:31.119 "cntlid": 101, 00:22:31.119 "qid": 0, 00:22:31.119 "state": "enabled", 00:22:31.119 "listen_address": { 00:22:31.119 "trtype": "TCP", 00:22:31.119 "adrfam": "IPv4", 00:22:31.119 "traddr": "10.0.0.2", 00:22:31.119 "trsvcid": "4420" 00:22:31.119 }, 00:22:31.119 "peer_address": { 00:22:31.119 "trtype": "TCP", 00:22:31.119 "adrfam": "IPv4", 00:22:31.119 "traddr": "10.0.0.1", 00:22:31.119 "trsvcid": "51300" 00:22:31.119 }, 00:22:31.119 "auth": { 00:22:31.119 "state": "completed", 00:22:31.119 "digest": "sha512", 00:22:31.119 "dhgroup": "null" 00:22:31.119 } 00:22:31.119 } 00:22:31.119 ]' 00:22:31.119 20:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:31.119 20:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:31.119 20:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:31.119 20:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:22:31.119 20:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:31.380 20:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:31.380 20:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:31.380 20:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.380 20:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:Mjg0OTMwNjU4MzVkZjlmN2QwZmU4ZjI0MjJhMmY2ZTc4OWM4MjkwMmQwNjQ1NzAxxkicmQ==: 00:22:32.322 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:32.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:32.322 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:32.322 20:14:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.322 20:14:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.322 20:14:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.322 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:32.322 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:32.322 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:32.322 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 3 00:22:32.322 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:32.322 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:32.322 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:32.322 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:32.322 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:32.322 20:14:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.322 20:14:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.322 20:14:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.322 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:32.322 20:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:32.583 00:22:32.583 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:32.583 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.583 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:32.844 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.844 20:14:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.844 20:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.844 20:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.844 20:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.844 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:32.844 { 00:22:32.844 "cntlid": 103, 00:22:32.844 "qid": 0, 00:22:32.844 "state": "enabled", 00:22:32.844 "listen_address": { 00:22:32.844 "trtype": "TCP", 00:22:32.844 "adrfam": "IPv4", 00:22:32.844 "traddr": "10.0.0.2", 00:22:32.844 "trsvcid": "4420" 00:22:32.844 }, 00:22:32.844 "peer_address": { 00:22:32.844 "trtype": "TCP", 00:22:32.844 "adrfam": "IPv4", 00:22:32.844 "traddr": "10.0.0.1", 00:22:32.844 "trsvcid": "51324" 00:22:32.844 }, 00:22:32.844 "auth": { 00:22:32.844 "state": "completed", 00:22:32.844 "digest": "sha512", 00:22:32.844 "dhgroup": "null" 00:22:32.844 } 00:22:32.844 } 00:22:32.844 ]' 00:22:32.844 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:32.844 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:32.844 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:33.104 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:22:33.104 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:33.104 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:33.104 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:33.104 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.368 20:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:Yzk4ZWZkMzVhODk0MmIxOTJkNjVlYjg0ZTJjMWI1YjQ4YzM0MjljYWMwMDE1NDk5MTdjYmVhNzVkODk4NDE4ZSJBSbo=: 00:22:34.036 20:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:34.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:34.036 20:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:34.036 20:14:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.036 20:14:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.036 20:14:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.036 20:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:22:34.036 20:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:34.036 20:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:34.037 20:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:34.296 20:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 0 00:22:34.296 20:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:34.296 20:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:34.296 20:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:34.296 20:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:34.296 20:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:22:34.296 20:14:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.296 20:14:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.296 20:14:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.296 20:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:34.296 20:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:34.556 00:22:34.556 20:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:34.556 20:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:34.556 20:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:34.816 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.816 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:34.816 20:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.816 20:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.816 20:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.816 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:34.816 { 00:22:34.816 "cntlid": 105, 00:22:34.816 "qid": 0, 00:22:34.816 "state": "enabled", 00:22:34.816 "listen_address": { 00:22:34.816 "trtype": "TCP", 00:22:34.816 "adrfam": "IPv4", 00:22:34.816 "traddr": "10.0.0.2", 00:22:34.816 "trsvcid": "4420" 00:22:34.816 }, 00:22:34.816 "peer_address": { 00:22:34.816 "trtype": "TCP", 00:22:34.816 "adrfam": "IPv4", 00:22:34.816 "traddr": "10.0.0.1", 00:22:34.816 "trsvcid": "51362" 00:22:34.816 }, 00:22:34.816 "auth": { 00:22:34.816 "state": "completed", 00:22:34.816 "digest": "sha512", 00:22:34.816 "dhgroup": "ffdhe2048" 00:22:34.816 } 00:22:34.816 } 00:22:34.816 ]' 00:22:34.816 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:34.816 20:14:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:34.817 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:34.817 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:34.817 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:34.817 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:34.817 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:34.817 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:35.077 20:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZGU1YThmZWVmOGM0MzMxNmE0NzAzZDMxZmI3OTE5MzUwMThlMGNjYjFhYzA2OTc5gt6wRQ==: 00:22:36.020 20:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.020 20:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:36.020 20:14:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.020 20:14:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.020 20:14:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.020 20:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:36.020 20:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:36.020 20:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:36.020 20:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 1 00:22:36.020 20:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:36.020 20:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:36.020 20:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:36.020 20:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:36.020 20:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:22:36.020 20:14:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.020 20:14:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.020 20:14:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.020 20:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:36.020 20:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:36.281 00:22:36.281 20:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:36.281 20:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:36.281 20:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:36.542 20:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.542 20:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:36.542 20:14:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.542 20:14:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.542 20:14:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.542 20:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:36.542 { 00:22:36.542 "cntlid": 107, 00:22:36.542 "qid": 0, 00:22:36.542 "state": "enabled", 00:22:36.542 "listen_address": { 00:22:36.542 "trtype": "TCP", 00:22:36.542 "adrfam": "IPv4", 00:22:36.542 "traddr": "10.0.0.2", 00:22:36.542 "trsvcid": "4420" 00:22:36.542 }, 00:22:36.542 "peer_address": { 00:22:36.542 "trtype": "TCP", 00:22:36.542 "adrfam": "IPv4", 00:22:36.542 "traddr": "10.0.0.1", 00:22:36.542 "trsvcid": "51380" 00:22:36.542 }, 00:22:36.542 "auth": { 00:22:36.542 "state": "completed", 00:22:36.542 "digest": "sha512", 00:22:36.542 "dhgroup": "ffdhe2048" 00:22:36.542 } 00:22:36.542 } 00:22:36.542 ]' 00:22:36.542 20:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:36.542 20:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:36.542 20:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:36.542 20:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:36.542 20:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:36.542 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:36.542 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:36.542 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:36.803 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZmZhN2JlZmYzODA3M2Q0YTNjYWZlZGY5ZWZlZDY3YzfQuTxi: 00:22:37.746 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:37.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:22:37.746 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:37.746 20:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.746 20:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.746 20:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.746 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:37.746 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:37.746 20:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:37.746 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 2 00:22:37.746 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:37.746 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:37.746 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:37.746 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:37.746 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 00:22:37.746 20:14:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.746 20:14:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.746 20:14:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.746 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:37.746 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:38.006 00:22:38.006 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:38.006 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.006 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:38.267 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.267 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:38.267 20:14:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.267 20:14:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.267 20:14:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:22:38.267 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:38.267 { 00:22:38.267 "cntlid": 109, 00:22:38.267 "qid": 0, 00:22:38.267 "state": "enabled", 00:22:38.267 "listen_address": { 00:22:38.267 "trtype": "TCP", 00:22:38.267 "adrfam": "IPv4", 00:22:38.267 "traddr": "10.0.0.2", 00:22:38.267 "trsvcid": "4420" 00:22:38.267 }, 00:22:38.267 "peer_address": { 00:22:38.267 "trtype": "TCP", 00:22:38.267 "adrfam": "IPv4", 00:22:38.267 "traddr": "10.0.0.1", 00:22:38.267 "trsvcid": "54492" 00:22:38.267 }, 00:22:38.267 "auth": { 00:22:38.267 "state": "completed", 00:22:38.267 "digest": "sha512", 00:22:38.267 "dhgroup": "ffdhe2048" 00:22:38.267 } 00:22:38.267 } 00:22:38.267 ]' 00:22:38.267 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:38.267 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:38.267 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:38.267 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:38.267 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:38.528 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:38.528 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:38.528 20:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:38.528 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:Mjg0OTMwNjU4MzVkZjlmN2QwZmU4ZjI0MjJhMmY2ZTc4OWM4MjkwMmQwNjQ1NzAxxkicmQ==: 00:22:39.470 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:39.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:39.470 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:39.470 20:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.470 20:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.470 20:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.470 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:39.470 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:39.470 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:39.731 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 3 00:22:39.731 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:39.731 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:39.731 20:14:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:39.731 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:39.731 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:39.731 20:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.731 20:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.731 20:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.731 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:39.731 20:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:39.992 00:22:39.992 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:39.992 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:39.992 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:39.992 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.992 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:39.992 20:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.992 20:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.992 20:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.992 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:39.992 { 00:22:39.992 "cntlid": 111, 00:22:39.992 "qid": 0, 00:22:39.992 "state": "enabled", 00:22:39.992 "listen_address": { 00:22:39.992 "trtype": "TCP", 00:22:39.992 "adrfam": "IPv4", 00:22:39.992 "traddr": "10.0.0.2", 00:22:39.992 "trsvcid": "4420" 00:22:39.992 }, 00:22:39.992 "peer_address": { 00:22:39.992 "trtype": "TCP", 00:22:39.992 "adrfam": "IPv4", 00:22:39.992 "traddr": "10.0.0.1", 00:22:39.992 "trsvcid": "54512" 00:22:39.992 }, 00:22:39.992 "auth": { 00:22:39.992 "state": "completed", 00:22:39.992 "digest": "sha512", 00:22:39.992 "dhgroup": "ffdhe2048" 00:22:39.992 } 00:22:39.992 } 00:22:39.992 ]' 00:22:39.992 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:40.252 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:40.252 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:40.253 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:40.253 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:40.253 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:40.253 20:14:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:40.253 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:40.523 20:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:Yzk4ZWZkMzVhODk0MmIxOTJkNjVlYjg0ZTJjMWI1YjQ4YzM0MjljYWMwMDE1NDk5MTdjYmVhNzVkODk4NDE4ZSJBSbo=: 00:22:41.096 20:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:41.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:41.096 20:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:41.096 20:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.096 20:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.096 20:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.096 20:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:22:41.096 20:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:41.096 20:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:41.096 20:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:41.357 20:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 0 00:22:41.357 20:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:41.357 20:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:41.357 20:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:41.357 20:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:41.357 20:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:22:41.357 20:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.357 20:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.357 20:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.357 20:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:41.357 20:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:41.619 00:22:41.619 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:41.619 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:41.619 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:41.880 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.880 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:41.880 20:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.880 20:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.880 20:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.880 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:41.880 { 00:22:41.880 "cntlid": 113, 00:22:41.880 "qid": 0, 00:22:41.880 "state": "enabled", 00:22:41.880 "listen_address": { 00:22:41.880 "trtype": "TCP", 00:22:41.880 "adrfam": "IPv4", 00:22:41.880 "traddr": "10.0.0.2", 00:22:41.880 "trsvcid": "4420" 00:22:41.880 }, 00:22:41.880 "peer_address": { 00:22:41.880 "trtype": "TCP", 00:22:41.880 "adrfam": "IPv4", 00:22:41.880 "traddr": "10.0.0.1", 00:22:41.880 "trsvcid": "54540" 00:22:41.880 }, 00:22:41.880 "auth": { 00:22:41.880 "state": "completed", 00:22:41.880 "digest": "sha512", 00:22:41.880 "dhgroup": "ffdhe3072" 00:22:41.880 } 00:22:41.880 } 00:22:41.880 ]' 00:22:41.880 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:41.880 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:41.880 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:41.880 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:41.880 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:42.142 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:42.142 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:42.142 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:42.142 20:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZGU1YThmZWVmOGM0MzMxNmE0NzAzZDMxZmI3OTE5MzUwMThlMGNjYjFhYzA2OTc5gt6wRQ==: 00:22:43.085 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:43.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:43.085 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:43.085 20:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.085 20:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:22:43.085 20:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.085 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:43.085 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:43.085 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:43.085 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 1 00:22:43.085 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:43.085 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:43.085 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:43.085 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:43.085 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:22:43.085 20:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.085 20:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.085 20:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.085 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:43.085 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:43.345 00:22:43.606 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:43.606 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.606 20:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:43.606 20:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.606 20:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.606 20:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.606 20:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.606 20:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.606 20:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:43.606 { 00:22:43.606 "cntlid": 115, 00:22:43.606 "qid": 0, 00:22:43.606 "state": "enabled", 00:22:43.606 "listen_address": { 00:22:43.606 "trtype": "TCP", 00:22:43.606 "adrfam": "IPv4", 00:22:43.606 "traddr": "10.0.0.2", 00:22:43.606 "trsvcid": "4420" 00:22:43.606 }, 00:22:43.606 "peer_address": { 00:22:43.606 
"trtype": "TCP", 00:22:43.606 "adrfam": "IPv4", 00:22:43.606 "traddr": "10.0.0.1", 00:22:43.606 "trsvcid": "54554" 00:22:43.606 }, 00:22:43.606 "auth": { 00:22:43.606 "state": "completed", 00:22:43.606 "digest": "sha512", 00:22:43.606 "dhgroup": "ffdhe3072" 00:22:43.606 } 00:22:43.606 } 00:22:43.606 ]' 00:22:43.606 20:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:43.868 20:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:43.868 20:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:43.868 20:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:43.868 20:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:43.868 20:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:43.868 20:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:43.868 20:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:44.128 20:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZmZhN2JlZmYzODA3M2Q0YTNjYWZlZGY5ZWZlZDY3YzfQuTxi: 00:22:44.700 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:44.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:44.700 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:44.700 20:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.700 20:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.700 20:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.700 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:44.701 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:44.701 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:44.961 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 2 00:22:44.961 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:44.961 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:44.961 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:44.961 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:44.961 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 00:22:44.961 20:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:44.961 20:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.961 20:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.962 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:44.962 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:45.222 00:22:45.222 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:45.222 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:45.222 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:45.482 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.482 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:45.482 20:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.482 20:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.482 20:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.482 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:45.482 { 00:22:45.482 "cntlid": 117, 00:22:45.482 "qid": 0, 00:22:45.482 "state": "enabled", 00:22:45.482 "listen_address": { 00:22:45.482 "trtype": "TCP", 00:22:45.482 "adrfam": "IPv4", 00:22:45.482 "traddr": "10.0.0.2", 00:22:45.482 "trsvcid": "4420" 00:22:45.482 }, 00:22:45.482 "peer_address": { 00:22:45.482 "trtype": "TCP", 00:22:45.482 "adrfam": "IPv4", 00:22:45.482 "traddr": "10.0.0.1", 00:22:45.482 "trsvcid": "54586" 00:22:45.482 }, 00:22:45.482 "auth": { 00:22:45.482 "state": "completed", 00:22:45.482 "digest": "sha512", 00:22:45.482 "dhgroup": "ffdhe3072" 00:22:45.482 } 00:22:45.482 } 00:22:45.482 ]' 00:22:45.482 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:45.482 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:45.482 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:45.743 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:45.743 20:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:45.743 20:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:45.743 20:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:45.743 20:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:46.004 20:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:Mjg0OTMwNjU4MzVkZjlmN2QwZmU4ZjI0MjJhMmY2ZTc4OWM4MjkwMmQwNjQ1NzAxxkicmQ==: 00:22:46.575 20:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:46.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:46.575 20:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:46.575 20:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.575 20:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.575 20:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.575 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:46.575 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:46.575 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:46.836 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 3 00:22:46.836 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:46.836 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:46.836 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:46.836 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:46.836 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:46.836 20:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.836 20:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.836 20:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.836 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:46.836 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:47.096 00:22:47.096 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:47.096 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:47.096 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:47.357 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.357 20:14:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:47.357 20:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.357 20:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.357 20:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.357 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:47.357 { 00:22:47.357 "cntlid": 119, 00:22:47.357 "qid": 0, 00:22:47.357 "state": "enabled", 00:22:47.357 "listen_address": { 00:22:47.357 "trtype": "TCP", 00:22:47.357 "adrfam": "IPv4", 00:22:47.357 "traddr": "10.0.0.2", 00:22:47.357 "trsvcid": "4420" 00:22:47.357 }, 00:22:47.357 "peer_address": { 00:22:47.357 "trtype": "TCP", 00:22:47.357 "adrfam": "IPv4", 00:22:47.357 "traddr": "10.0.0.1", 00:22:47.357 "trsvcid": "54626" 00:22:47.357 }, 00:22:47.357 "auth": { 00:22:47.357 "state": "completed", 00:22:47.357 "digest": "sha512", 00:22:47.357 "dhgroup": "ffdhe3072" 00:22:47.357 } 00:22:47.357 } 00:22:47.357 ]' 00:22:47.357 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:47.357 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:47.357 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:47.357 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:47.357 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:47.618 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:47.618 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:47.618 20:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:47.618 20:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:Yzk4ZWZkMzVhODk0MmIxOTJkNjVlYjg0ZTJjMWI1YjQ4YzM0MjljYWMwMDE1NDk5MTdjYmVhNzVkODk4NDE4ZSJBSbo=: 00:22:48.561 20:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:48.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:48.561 20:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:48.561 20:14:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.561 20:14:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.561 20:14:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.561 20:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:22:48.561 20:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:48.561 20:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:48.561 20:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:48.561 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 0 00:22:48.561 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:48.561 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:48.561 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:48.561 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:48.561 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:22:48.561 20:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.561 20:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.821 20:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.821 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:48.821 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:49.081 00:22:49.081 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:49.081 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:49.081 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:49.341 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.341 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:49.341 20:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.341 20:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.341 20:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.341 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:49.341 { 00:22:49.341 "cntlid": 121, 00:22:49.341 "qid": 0, 00:22:49.341 "state": "enabled", 00:22:49.341 "listen_address": { 00:22:49.341 "trtype": "TCP", 00:22:49.341 "adrfam": "IPv4", 00:22:49.341 "traddr": "10.0.0.2", 00:22:49.341 "trsvcid": "4420" 00:22:49.341 }, 00:22:49.341 "peer_address": { 00:22:49.341 "trtype": "TCP", 00:22:49.341 "adrfam": "IPv4", 00:22:49.341 "traddr": "10.0.0.1", 00:22:49.341 "trsvcid": "55620" 00:22:49.341 }, 00:22:49.341 "auth": { 00:22:49.341 "state": "completed", 00:22:49.341 "digest": "sha512", 00:22:49.341 "dhgroup": "ffdhe4096" 00:22:49.341 } 00:22:49.341 } 00:22:49.341 ]' 00:22:49.341 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:49.341 20:14:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:49.341 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:49.341 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:49.341 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:49.341 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:49.341 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:49.341 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:49.601 20:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZGU1YThmZWVmOGM0MzMxNmE0NzAzZDMxZmI3OTE5MzUwMThlMGNjYjFhYzA2OTc5gt6wRQ==: 00:22:50.173 20:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:50.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:50.433 20:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:50.433 20:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.433 20:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.433 20:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.434 20:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:50.434 20:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:50.434 20:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:50.434 20:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 1 00:22:50.434 20:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:50.434 20:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:50.434 20:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:50.434 20:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:50.434 20:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:22:50.434 20:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.434 20:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.434 20:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.434 20:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:50.434 20:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:50.693 00:22:50.953 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:50.953 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:50.953 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:50.953 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.953 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:50.953 20:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.953 20:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.953 20:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.953 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:50.953 { 00:22:50.953 "cntlid": 123, 00:22:50.953 "qid": 0, 00:22:50.953 "state": "enabled", 00:22:50.953 "listen_address": { 00:22:50.953 "trtype": "TCP", 00:22:50.953 "adrfam": "IPv4", 00:22:50.953 "traddr": "10.0.0.2", 00:22:50.953 "trsvcid": "4420" 00:22:50.953 }, 00:22:50.953 "peer_address": { 00:22:50.953 "trtype": "TCP", 00:22:50.953 "adrfam": "IPv4", 00:22:50.953 "traddr": "10.0.0.1", 00:22:50.953 "trsvcid": "55644" 00:22:50.953 }, 00:22:50.953 "auth": { 00:22:50.953 "state": "completed", 00:22:50.953 "digest": "sha512", 00:22:50.953 "dhgroup": "ffdhe4096" 00:22:50.953 } 00:22:50.953 } 00:22:50.953 ]' 00:22:50.953 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:51.214 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:51.214 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:51.214 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:51.214 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:51.214 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:51.214 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:51.214 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:51.475 20:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZmZhN2JlZmYzODA3M2Q0YTNjYWZlZGY5ZWZlZDY3YzfQuTxi: 00:22:52.046 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:52.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:22:52.046 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:52.046 20:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.046 20:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.046 20:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.046 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:52.046 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:52.046 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:52.307 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 2 00:22:52.307 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:52.307 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:52.307 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:52.307 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:52.307 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 00:22:52.307 20:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.307 20:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.307 20:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.307 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:52.307 20:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:52.568 00:22:52.568 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:52.568 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:52.568 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:52.828 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.829 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:52.829 20:14:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.829 20:14:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.829 20:14:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
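
The same pattern repeats here for ffdhe4096: the key1 pass just closed with the nvme-cli connect/disconnect above, and the key2 pass is now at its nvmf_subsystem_get_qpairs check (its qpairs listing follows). The kernel-initiator leg that closes each pass is sketched below with values copied from the excerpt; the DHHC-1 string is the throwaway key1 test secret printed a few lines up, and the final remove_host call is issued through rpc_cmd, which in the test script resolves to the target's default RPC socket (the full rpc.py path shown here is only for illustration).

  # nvme-cli leg of a pass, values taken from the log above.
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
  hostid=00539ede-7deb-ec11-9bc7-a4bf01928396

  # Connect the kernel initiator with the plaintext DH-CHAP secret for the key under test,
  # then disconnect (the log prints "disconnected 1 controller(s)").
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
      --dhchap-secret 'DHHC-1:01:ZmZhN2JlZmYzODA3M2Q0YTNjYWZlZGY5ZWZlZDY3YzfQuTxi:'
  nvme disconnect -n "$subnqn"

  # Drop the host from the subsystem before the next digest/dhgroup/key combination.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
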
00:22:52.829 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:52.829 { 00:22:52.829 "cntlid": 125, 00:22:52.829 "qid": 0, 00:22:52.829 "state": "enabled", 00:22:52.829 "listen_address": { 00:22:52.829 "trtype": "TCP", 00:22:52.829 "adrfam": "IPv4", 00:22:52.829 "traddr": "10.0.0.2", 00:22:52.829 "trsvcid": "4420" 00:22:52.829 }, 00:22:52.829 "peer_address": { 00:22:52.829 "trtype": "TCP", 00:22:52.829 "adrfam": "IPv4", 00:22:52.829 "traddr": "10.0.0.1", 00:22:52.829 "trsvcid": "55672" 00:22:52.829 }, 00:22:52.829 "auth": { 00:22:52.829 "state": "completed", 00:22:52.829 "digest": "sha512", 00:22:52.829 "dhgroup": "ffdhe4096" 00:22:52.829 } 00:22:52.829 } 00:22:52.829 ]' 00:22:52.829 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:52.829 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:52.829 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:53.089 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:53.089 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:53.089 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:53.089 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:53.089 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:53.350 20:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:Mjg0OTMwNjU4MzVkZjlmN2QwZmU4ZjI0MjJhMmY2ZTc4OWM4MjkwMmQwNjQ1NzAxxkicmQ==: 00:22:53.921 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:53.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:53.921 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:53.921 20:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.921 20:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.921 20:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.921 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:53.921 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:53.921 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:54.181 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 3 00:22:54.181 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:54.181 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:54.181 20:14:46 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:54.181 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:54.181 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:54.181 20:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.181 20:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.181 20:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.181 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:54.181 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:54.442 00:22:54.442 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:54.442 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:54.442 20:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:54.703 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.703 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:54.703 20:14:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.703 20:14:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.703 20:14:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.703 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:54.703 { 00:22:54.703 "cntlid": 127, 00:22:54.703 "qid": 0, 00:22:54.703 "state": "enabled", 00:22:54.703 "listen_address": { 00:22:54.703 "trtype": "TCP", 00:22:54.703 "adrfam": "IPv4", 00:22:54.703 "traddr": "10.0.0.2", 00:22:54.703 "trsvcid": "4420" 00:22:54.704 }, 00:22:54.704 "peer_address": { 00:22:54.704 "trtype": "TCP", 00:22:54.704 "adrfam": "IPv4", 00:22:54.704 "traddr": "10.0.0.1", 00:22:54.704 "trsvcid": "55688" 00:22:54.704 }, 00:22:54.704 "auth": { 00:22:54.704 "state": "completed", 00:22:54.704 "digest": "sha512", 00:22:54.704 "dhgroup": "ffdhe4096" 00:22:54.704 } 00:22:54.704 } 00:22:54.704 ]' 00:22:54.704 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:54.704 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:54.704 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:54.704 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:54.704 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:54.965 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:54.965 20:14:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:54.965 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:54.965 20:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:Yzk4ZWZkMzVhODk0MmIxOTJkNjVlYjg0ZTJjMWI1YjQ4YzM0MjljYWMwMDE1NDk5MTdjYmVhNzVkODk4NDE4ZSJBSbo=: 00:22:55.908 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:55.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:55.908 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:55.908 20:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.908 20:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.908 20:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.908 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:22:55.908 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:55.908 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:55.908 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:55.908 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 0 00:22:55.908 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:55.908 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:55.908 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:55.908 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:55.908 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:22:55.908 20:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.908 20:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.908 20:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.908 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:55.908 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:56.480 00:22:56.480 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:56.480 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:56.480 20:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:56.741 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.741 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:56.741 20:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.741 20:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.741 20:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.741 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:56.741 { 00:22:56.741 "cntlid": 129, 00:22:56.741 "qid": 0, 00:22:56.741 "state": "enabled", 00:22:56.741 "listen_address": { 00:22:56.741 "trtype": "TCP", 00:22:56.741 "adrfam": "IPv4", 00:22:56.741 "traddr": "10.0.0.2", 00:22:56.741 "trsvcid": "4420" 00:22:56.741 }, 00:22:56.741 "peer_address": { 00:22:56.741 "trtype": "TCP", 00:22:56.741 "adrfam": "IPv4", 00:22:56.741 "traddr": "10.0.0.1", 00:22:56.741 "trsvcid": "55714" 00:22:56.741 }, 00:22:56.741 "auth": { 00:22:56.741 "state": "completed", 00:22:56.741 "digest": "sha512", 00:22:56.741 "dhgroup": "ffdhe6144" 00:22:56.741 } 00:22:56.741 } 00:22:56.741 ]' 00:22:56.741 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:56.741 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:56.741 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:56.741 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:56.741 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:56.741 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:56.741 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:56.741 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:57.002 20:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZGU1YThmZWVmOGM0MzMxNmE0NzAzZDMxZmI3OTE5MzUwMThlMGNjYjFhYzA2OTc5gt6wRQ==: 00:22:57.972 20:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:57.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:57.972 20:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:57.972 20:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.972 20:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:22:57.972 20:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.972 20:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:57.972 20:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:57.972 20:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:57.972 20:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 1 00:22:57.972 20:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:57.972 20:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:57.972 20:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:57.972 20:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:57.972 20:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:22:57.972 20:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.972 20:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.972 20:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.972 20:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:57.972 20:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:58.543 00:22:58.544 20:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:58.544 20:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:58.544 20:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:58.544 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.544 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:58.544 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.544 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.544 20:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.803 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:58.803 { 00:22:58.803 "cntlid": 131, 00:22:58.803 "qid": 0, 00:22:58.803 "state": "enabled", 00:22:58.803 "listen_address": { 00:22:58.803 "trtype": "TCP", 00:22:58.803 "adrfam": "IPv4", 00:22:58.803 "traddr": "10.0.0.2", 00:22:58.803 "trsvcid": "4420" 00:22:58.803 }, 00:22:58.803 "peer_address": { 00:22:58.803 
"trtype": "TCP", 00:22:58.803 "adrfam": "IPv4", 00:22:58.803 "traddr": "10.0.0.1", 00:22:58.803 "trsvcid": "58450" 00:22:58.803 }, 00:22:58.803 "auth": { 00:22:58.803 "state": "completed", 00:22:58.803 "digest": "sha512", 00:22:58.803 "dhgroup": "ffdhe6144" 00:22:58.803 } 00:22:58.803 } 00:22:58.803 ]' 00:22:58.803 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:58.803 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:58.803 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:58.803 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:58.803 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:58.803 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:58.803 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:58.803 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:59.063 20:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZmZhN2JlZmYzODA3M2Q0YTNjYWZlZGY5ZWZlZDY3YzfQuTxi: 00:22:59.632 20:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:59.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:59.894 20:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:59.894 20:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.894 20:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.894 20:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.894 20:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:59.894 20:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:59.894 20:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:59.894 20:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 2 00:22:59.894 20:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:59.894 20:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:59.894 20:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:59.894 20:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:59.894 20:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 00:22:59.894 20:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:59.894 20:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.894 20:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.894 20:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:59.894 20:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:00.479 00:23:00.479 20:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:23:00.479 20:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:23:00.479 20:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:00.740 20:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.740 20:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:00.740 20:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.740 20:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.740 20:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.740 20:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:23:00.740 { 00:23:00.740 "cntlid": 133, 00:23:00.740 "qid": 0, 00:23:00.740 "state": "enabled", 00:23:00.740 "listen_address": { 00:23:00.740 "trtype": "TCP", 00:23:00.740 "adrfam": "IPv4", 00:23:00.740 "traddr": "10.0.0.2", 00:23:00.740 "trsvcid": "4420" 00:23:00.740 }, 00:23:00.740 "peer_address": { 00:23:00.740 "trtype": "TCP", 00:23:00.740 "adrfam": "IPv4", 00:23:00.740 "traddr": "10.0.0.1", 00:23:00.740 "trsvcid": "58476" 00:23:00.740 }, 00:23:00.740 "auth": { 00:23:00.740 "state": "completed", 00:23:00.740 "digest": "sha512", 00:23:00.740 "dhgroup": "ffdhe6144" 00:23:00.740 } 00:23:00.740 } 00:23:00.740 ]' 00:23:00.740 20:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:23:00.740 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:00.740 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:23:00.740 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:00.740 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:23:00.740 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:00.740 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:00.740 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:01.008 20:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:Mjg0OTMwNjU4MzVkZjlmN2QwZmU4ZjI0MjJhMmY2ZTc4OWM4MjkwMmQwNjQ1NzAxxkicmQ==: 00:23:01.580 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:01.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:01.840 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:01.840 20:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.840 20:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.840 20:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.840 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:23:01.840 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:01.840 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:02.100 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 3 00:23:02.100 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:23:02.100 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:02.100 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:02.100 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:02.100 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:02.100 20:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.100 20:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.100 20:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.100 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:02.100 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:02.361 00:23:02.361 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:23:02.361 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:23:02.361 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.622 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.622 20:14:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:02.622 20:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.622 20:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.622 20:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.622 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:23:02.622 { 00:23:02.622 "cntlid": 135, 00:23:02.622 "qid": 0, 00:23:02.622 "state": "enabled", 00:23:02.622 "listen_address": { 00:23:02.622 "trtype": "TCP", 00:23:02.622 "adrfam": "IPv4", 00:23:02.622 "traddr": "10.0.0.2", 00:23:02.622 "trsvcid": "4420" 00:23:02.622 }, 00:23:02.622 "peer_address": { 00:23:02.622 "trtype": "TCP", 00:23:02.622 "adrfam": "IPv4", 00:23:02.622 "traddr": "10.0.0.1", 00:23:02.622 "trsvcid": "58498" 00:23:02.622 }, 00:23:02.622 "auth": { 00:23:02.622 "state": "completed", 00:23:02.622 "digest": "sha512", 00:23:02.622 "dhgroup": "ffdhe6144" 00:23:02.622 } 00:23:02.622 } 00:23:02.622 ]' 00:23:02.622 20:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:23:02.622 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:02.622 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:23:02.622 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:02.622 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:23:02.622 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:02.622 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:02.622 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:02.883 20:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:Yzk4ZWZkMzVhODk0MmIxOTJkNjVlYjg0ZTJjMWI1YjQ4YzM0MjljYWMwMDE1NDk5MTdjYmVhNzVkODk4NDE4ZSJBSbo=: 00:23:03.899 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:03.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:03.899 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:03.899 20:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.899 20:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.899 20:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.899 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:23:03.899 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:23:03.899 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:03.899 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:03.899 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 0 00:23:03.899 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:23:03.900 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:03.900 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:03.900 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:03.900 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:23:03.900 20:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.900 20:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.900 20:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.900 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:03.900 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:04.483 00:23:04.484 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:23:04.484 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:23:04.484 20:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:04.744 20:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.744 20:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:04.744 20:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.744 20:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.744 20:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.744 20:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:23:04.744 { 00:23:04.744 "cntlid": 137, 00:23:04.744 "qid": 0, 00:23:04.744 "state": "enabled", 00:23:04.744 "listen_address": { 00:23:04.744 "trtype": "TCP", 00:23:04.744 "adrfam": "IPv4", 00:23:04.744 "traddr": "10.0.0.2", 00:23:04.744 "trsvcid": "4420" 00:23:04.744 }, 00:23:04.744 "peer_address": { 00:23:04.744 "trtype": "TCP", 00:23:04.744 "adrfam": "IPv4", 00:23:04.744 "traddr": "10.0.0.1", 00:23:04.744 "trsvcid": "58512" 00:23:04.744 }, 00:23:04.744 "auth": { 00:23:04.744 "state": "completed", 00:23:04.744 "digest": "sha512", 00:23:04.744 "dhgroup": "ffdhe8192" 00:23:04.744 } 00:23:04.744 } 00:23:04.744 ]' 00:23:04.744 20:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:23:04.744 20:14:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:04.744 20:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:23:04.744 20:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:04.744 20:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:23:04.744 20:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:04.744 20:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:04.744 20:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:05.005 20:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZGU1YThmZWVmOGM0MzMxNmE0NzAzZDMxZmI3OTE5MzUwMThlMGNjYjFhYzA2OTc5gt6wRQ==: 00:23:05.947 20:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:05.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:05.947 20:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:05.947 20:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.947 20:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.947 20:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.947 20:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:23:05.947 20:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:05.947 20:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:05.947 20:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 1 00:23:05.947 20:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:23:05.947 20:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:05.947 20:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:05.947 20:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:05.947 20:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:23:05.947 20:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.947 20:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.947 20:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.947 20:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:23:05.947 20:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:23:06.518 00:23:06.518 20:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:23:06.518 20:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:23:06.518 20:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:06.779 20:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.779 20:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:06.779 20:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.779 20:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.779 20:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.779 20:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:23:06.779 { 00:23:06.779 "cntlid": 139, 00:23:06.779 "qid": 0, 00:23:06.779 "state": "enabled", 00:23:06.779 "listen_address": { 00:23:06.779 "trtype": "TCP", 00:23:06.779 "adrfam": "IPv4", 00:23:06.779 "traddr": "10.0.0.2", 00:23:06.779 "trsvcid": "4420" 00:23:06.779 }, 00:23:06.779 "peer_address": { 00:23:06.779 "trtype": "TCP", 00:23:06.779 "adrfam": "IPv4", 00:23:06.779 "traddr": "10.0.0.1", 00:23:06.779 "trsvcid": "58540" 00:23:06.779 }, 00:23:06.779 "auth": { 00:23:06.779 "state": "completed", 00:23:06.779 "digest": "sha512", 00:23:06.779 "dhgroup": "ffdhe8192" 00:23:06.779 } 00:23:06.779 } 00:23:06.779 ]' 00:23:06.779 20:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:23:07.040 20:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:07.040 20:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:23:07.040 20:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:07.040 20:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:23:07.040 20:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:07.040 20:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:07.040 20:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:07.300 20:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZmZhN2JlZmYzODA3M2Q0YTNjYWZlZGY5ZWZlZDY3YzfQuTxi: 00:23:07.873 20:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:07.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:23:07.873 20:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:07.873 20:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.873 20:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.873 20:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.873 20:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:23:07.873 20:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:07.873 20:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:08.134 20:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 2 00:23:08.134 20:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:23:08.134 20:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:08.134 20:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:08.134 20:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:08.134 20:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 00:23:08.134 20:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.134 20:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.134 20:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.134 20:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:08.134 20:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:08.706 00:23:08.967 20:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:23:08.967 20:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:23:08.967 20:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:08.967 20:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.967 20:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:08.967 20:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.967 20:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.967 20:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
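A minimal sketch of the per-iteration flow the auth test keeps repeating above, assuming the same rpc.py script, host RPC socket (/var/tmp/host.sock), target address 10.0.0.2:4420, NQNs and DHHC-1 secrets that appear in this log, and that the target app answers on its default RPC socket; digest, dhgroup and key match the surrounding iteration (sha512 / ffdhe8192 / key2):

# One connect/authenticate iteration, as driven by target/auth.sh above (sketch only).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Host side: restrict DH-HMAC-CHAP negotiation to one digest and one DH group.
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

# Target side (default RPC socket assumed): allow the host NQN with a single key.
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key2

# Attach through the SPDK host stack, then check what the target negotiated.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q $HOSTNQN -n $SUBNQN --dhchap-key key2
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth | .digest, .dhgroup, .state'
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# Repeat the check with the kernel initiator, passing the secret directly.
nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN \
    --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 \
    --dhchap-secret 'DHHC-1:02:...'   # full key2 secret as printed in the log above
nvme disconnect -n $SUBNQN
$RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN

With a matching key on both sides, the qpair's auth block is expected to report the configured digest and dhgroup with state "completed", which is exactly what the jq checks in the trace above assert.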
00:23:08.967 20:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:23:08.967 { 00:23:08.967 "cntlid": 141, 00:23:08.967 "qid": 0, 00:23:08.967 "state": "enabled", 00:23:08.967 "listen_address": { 00:23:08.967 "trtype": "TCP", 00:23:08.967 "adrfam": "IPv4", 00:23:08.967 "traddr": "10.0.0.2", 00:23:08.967 "trsvcid": "4420" 00:23:08.967 }, 00:23:08.967 "peer_address": { 00:23:08.967 "trtype": "TCP", 00:23:08.967 "adrfam": "IPv4", 00:23:08.967 "traddr": "10.0.0.1", 00:23:08.967 "trsvcid": "35832" 00:23:08.967 }, 00:23:08.967 "auth": { 00:23:08.967 "state": "completed", 00:23:08.967 "digest": "sha512", 00:23:08.967 "dhgroup": "ffdhe8192" 00:23:08.967 } 00:23:08.967 } 00:23:08.967 ]' 00:23:08.967 20:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:23:09.229 20:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:09.229 20:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:23:09.229 20:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:09.229 20:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:23:09.229 20:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:09.229 20:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:09.229 20:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:09.488 20:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:Mjg0OTMwNjU4MzVkZjlmN2QwZmU4ZjI0MjJhMmY2ZTc4OWM4MjkwMmQwNjQ1NzAxxkicmQ==: 00:23:10.059 20:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:10.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:10.059 20:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:10.059 20:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.059 20:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.059 20:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.059 20:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:23:10.059 20:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:10.059 20:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:10.320 20:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 3 00:23:10.320 20:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:23:10.320 20:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:10.320 20:15:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:10.320 20:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:10.320 20:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:10.320 20:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.320 20:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.320 20:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.320 20:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:10.320 20:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:10.890 00:23:10.890 20:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:23:10.890 20:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:10.890 20:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:23:11.151 20:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.151 20:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:11.151 20:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.151 20:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.151 20:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.151 20:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:23:11.151 { 00:23:11.151 "cntlid": 143, 00:23:11.151 "qid": 0, 00:23:11.151 "state": "enabled", 00:23:11.151 "listen_address": { 00:23:11.151 "trtype": "TCP", 00:23:11.151 "adrfam": "IPv4", 00:23:11.151 "traddr": "10.0.0.2", 00:23:11.151 "trsvcid": "4420" 00:23:11.151 }, 00:23:11.151 "peer_address": { 00:23:11.151 "trtype": "TCP", 00:23:11.151 "adrfam": "IPv4", 00:23:11.151 "traddr": "10.0.0.1", 00:23:11.151 "trsvcid": "35864" 00:23:11.151 }, 00:23:11.151 "auth": { 00:23:11.151 "state": "completed", 00:23:11.151 "digest": "sha512", 00:23:11.151 "dhgroup": "ffdhe8192" 00:23:11.151 } 00:23:11.151 } 00:23:11.151 ]' 00:23:11.151 20:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:23:11.151 20:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:11.151 20:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:23:11.411 20:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:11.411 20:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:23:11.411 20:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:11.411 20:15:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:11.411 20:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:11.672 20:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:Yzk4ZWZkMzVhODk0MmIxOTJkNjVlYjg0ZTJjMWI1YjQ4YzM0MjljYWMwMDE1NDk5MTdjYmVhNzVkODk4NDE4ZSJBSbo=: 00:23:12.244 20:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:12.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:12.244 20:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:12.244 20:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.244 20:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.244 20:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.244 20:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:23:12.244 20:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s sha256,sha384,sha512 00:23:12.244 20:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:23:12.244 20:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:12.244 20:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:12.244 20:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:12.506 20:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@107 -- # connect_authenticate sha512 ffdhe8192 0 00:23:12.506 20:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:23:12.506 20:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:12.506 20:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:12.506 20:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:12.506 20:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 00:23:12.506 20:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.506 20:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.506 20:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.506 20:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:12.506 
20:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:13.079 00:23:13.079 20:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:23:13.079 20:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:13.079 20:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:23:13.340 20:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.340 20:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:13.340 20:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.340 20:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.340 20:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.340 20:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:23:13.340 { 00:23:13.340 "cntlid": 145, 00:23:13.340 "qid": 0, 00:23:13.340 "state": "enabled", 00:23:13.340 "listen_address": { 00:23:13.340 "trtype": "TCP", 00:23:13.340 "adrfam": "IPv4", 00:23:13.340 "traddr": "10.0.0.2", 00:23:13.340 "trsvcid": "4420" 00:23:13.340 }, 00:23:13.340 "peer_address": { 00:23:13.340 "trtype": "TCP", 00:23:13.340 "adrfam": "IPv4", 00:23:13.340 "traddr": "10.0.0.1", 00:23:13.340 "trsvcid": "35884" 00:23:13.340 }, 00:23:13.340 "auth": { 00:23:13.340 "state": "completed", 00:23:13.340 "digest": "sha512", 00:23:13.340 "dhgroup": "ffdhe8192" 00:23:13.340 } 00:23:13.340 } 00:23:13.340 ]' 00:23:13.340 20:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:23:13.340 20:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:13.340 20:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:23:13.602 20:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:13.602 20:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:23:13.602 20:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:13.602 20:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:13.602 20:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:13.862 20:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZGU1YThmZWVmOGM0MzMxNmE0NzAzZDMxZmI3OTE5MzUwMThlMGNjYjFhYzA2OTc5gt6wRQ==: 00:23:14.435 20:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:14.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:14.435 20:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:14.435 20:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.435 20:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.435 20:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.435 20:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@110 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:23:14.435 20:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.435 20:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.435 20:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.435 20:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@111 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:14.435 20:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:14.435 20:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:14.435 20:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:14.435 20:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:14.435 20:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:14.435 20:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:14.436 20:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:14.436 20:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:15.007 request: 00:23:15.007 { 00:23:15.007 "name": "nvme0", 00:23:15.007 "trtype": "tcp", 00:23:15.007 "traddr": "10.0.0.2", 00:23:15.007 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:15.007 "adrfam": "ipv4", 00:23:15.007 "trsvcid": "4420", 00:23:15.007 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:15.007 "dhchap_key": "key2", 00:23:15.007 "method": "bdev_nvme_attach_controller", 00:23:15.007 "req_id": 1 00:23:15.007 } 00:23:15.007 Got JSON-RPC error response 00:23:15.007 response: 00:23:15.007 { 00:23:15.007 "code": -32602, 00:23:15.007 "message": "Invalid parameters" 00:23:15.007 } 00:23:15.007 20:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:15.007 20:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:15.007 20:15:07 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:15.007 20:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:15.007 20:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:15.007 20:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.007 20:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.007 20:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.007 20:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:23:15.007 20:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # cleanup 00:23:15.007 20:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 55930 00:23:15.007 20:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 55930 ']' 00:23:15.007 20:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 55930 00:23:15.267 20:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:23:15.267 20:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:15.267 20:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 55930 00:23:15.267 20:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:15.267 20:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:15.267 20:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 55930' 00:23:15.267 killing process with pid 55930 00:23:15.267 20:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 55930 00:23:15.267 20:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 55930 00:23:15.527 20:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:15.527 20:15:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:15.527 20:15:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:23:15.527 20:15:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:15.527 20:15:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:23:15.527 20:15:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:15.527 20:15:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:15.527 rmmod nvme_tcp 00:23:15.527 rmmod nvme_fabrics 00:23:15.527 rmmod nvme_keyring 00:23:15.527 20:15:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:15.527 20:15:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:23:15.527 20:15:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:23:15.527 20:15:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 55879 ']' 00:23:15.527 20:15:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 55879 00:23:15.527 20:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 55879 ']' 00:23:15.527 20:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 55879 00:23:15.527 20:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:23:15.527 20:15:07 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:15.527 20:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 55879 00:23:15.527 20:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:15.527 20:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:15.527 20:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 55879' 00:23:15.527 killing process with pid 55879 00:23:15.527 20:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 55879 00:23:15.527 20:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 55879 00:23:15.788 20:15:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:15.788 20:15:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:15.788 20:15:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:15.788 20:15:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:15.788 20:15:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:15.788 20:15:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.788 20:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:15.788 20:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.696 20:15:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:17.696 20:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.XiM /tmp/spdk.key-sha256.IjL /tmp/spdk.key-sha384.ylz /tmp/spdk.key-sha512.Yvg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:23:17.696 00:23:17.696 real 2m33.840s 00:23:17.696 user 5m46.846s 00:23:17.696 sys 0m23.002s 00:23:17.696 20:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:17.696 20:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.696 ************************************ 00:23:17.696 END TEST nvmf_auth_target 00:23:17.696 ************************************ 00:23:17.696 20:15:10 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:23:17.696 20:15:10 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:17.696 20:15:10 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:23:17.696 20:15:10 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:17.696 20:15:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:17.958 ************************************ 00:23:17.958 START TEST nvmf_bdevio_no_huge 00:23:17.958 ************************************ 00:23:17.958 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:17.958 * Looking for test storage... 
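The no-hugepages bdevio stage starting here is driven by run_test just like the auth stage that just ended; a rough sketch of invoking only this stage by hand, assuming a checked-out tree at the same workspace path and that bdevio.sh can be run outside the autotest wrapper with an equivalent environment:

# Re-run only this stage (paths as they appear in this log).
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages

Judging from the nvmf/common.sh trace that follows, the --no-hugepages flag ends up feeding the NO_HUGE arguments into NVMF_APP, so the target application for this stage runs without hugepages.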
00:23:17.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:17.958 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:17.958 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:23:17.958 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:17.958 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:17.958 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:17.958 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:17.958 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:17.958 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:17.958 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:17.958 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:17.958 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:17.958 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:17.958 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:17.958 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:17.958 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:17.958 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:17.958 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:17.958 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:17.958 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:17.958 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:17.958 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:17.958 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:17.958 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.959 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.959 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.959 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:23:17.959 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.959 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:23:17.959 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:17.959 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:17.959 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:17.959 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:17.959 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:17.959 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:17.959 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:17.959 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:17.959 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:17.959 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:17.959 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:23:17.959 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:17.959 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:17.959 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:17.959 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:17.959 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:17.959 20:15:10 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.959 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:17.959 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.959 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:17.959 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:17.959 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:23:17.959 20:15:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:26.102 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:26.102 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:26.102 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:26.103 Found net devices under 0000:31:00.0: cvl_0_0 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:26.103 20:15:18 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:26.103 Found net devices under 0000:31:00.1: cvl_0_1 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:26.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:26.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:23:26.103 00:23:26.103 --- 10.0.0.2 ping statistics --- 00:23:26.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.103 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:26.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:26.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.375 ms 00:23:26.103 00:23:26.103 --- 10.0.0.1 ping statistics --- 00:23:26.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.103 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:26.103 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:26.365 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:26.365 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:26.365 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:26.365 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:26.365 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=89248 00:23:26.365 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 89248 00:23:26.365 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:26.365 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 89248 ']' 00:23:26.365 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.365 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:26.365 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
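For readers skimming the trace: the nvmf_tcp_init block above splits the dual-port E810 NIC between two network stacks on the same host, so the SPDK target (cvl_0_0, 10.0.0.2, inside the cvl_0_0_ns_spdk namespace) and the kernel-side initiator (cvl_0_1, 10.0.0.1, in the root namespace) exchange real NVMe/TCP traffic over a physical link. Condensed from the commands in the trace, and using this rig's interface names, the setup amounts to:

    ip netns add cvl_0_0_ns_spdk                                   # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP (port 4420) through
    ping -c 1 10.0.0.2                                             # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> initiator

The target is then started inside that namespace via ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78, as shown just above: mask 0x78 is binary 0111 1000, i.e. cores 3-6, which matches the four reactor notices that follow, and --no-huge -s 1024 is the point of this suite, a target capped at 1024 MB of plain (non-hugepage) memory.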
00:23:26.365 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:26.365 20:15:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:26.365 [2024-05-15 20:15:18.672183] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:23:26.365 [2024-05-15 20:15:18.672249] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:26.365 [2024-05-15 20:15:18.773969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:26.626 [2024-05-15 20:15:18.881751] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:26.626 [2024-05-15 20:15:18.881799] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:26.626 [2024-05-15 20:15:18.881808] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:26.626 [2024-05-15 20:15:18.881815] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:26.626 [2024-05-15 20:15:18.881821] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:26.626 [2024-05-15 20:15:18.881978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:26.626 [2024-05-15 20:15:18.882138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:23:26.626 [2024-05-15 20:15:18.882300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:26.626 [2024-05-15 20:15:18.882301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:23:27.198 20:15:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:27.198 20:15:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:23:27.199 20:15:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:27.199 20:15:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:27.199 20:15:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:27.199 20:15:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:27.199 20:15:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:27.199 20:15:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.199 20:15:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:27.199 [2024-05-15 20:15:19.630484] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:27.199 20:15:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.199 20:15:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:27.199 20:15:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.199 20:15:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:27.199 Malloc0 00:23:27.199 20:15:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.199 20:15:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:27.199 20:15:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.199 20:15:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:27.199 20:15:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.199 20:15:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:27.199 20:15:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.199 20:15:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:27.199 20:15:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.199 20:15:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:27.199 20:15:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.199 20:15:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:27.199 [2024-05-15 20:15:19.683795] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:27.199 [2024-05-15 20:15:19.684079] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:27.199 20:15:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.199 20:15:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:27.199 20:15:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:27.199 20:15:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:23:27.199 20:15:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:23:27.199 20:15:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:27.199 20:15:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:27.199 { 00:23:27.199 "params": { 00:23:27.199 "name": "Nvme$subsystem", 00:23:27.199 "trtype": "$TEST_TRANSPORT", 00:23:27.199 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.199 "adrfam": "ipv4", 00:23:27.199 "trsvcid": "$NVMF_PORT", 00:23:27.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.199 "hdgst": ${hdgst:-false}, 00:23:27.199 "ddgst": ${ddgst:-false} 00:23:27.199 }, 00:23:27.199 "method": "bdev_nvme_attach_controller" 00:23:27.199 } 00:23:27.199 EOF 00:23:27.199 )") 00:23:27.199 20:15:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:23:27.459 20:15:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
00:23:27.459 20:15:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:23:27.459 20:15:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:27.459 "params": { 00:23:27.459 "name": "Nvme1", 00:23:27.459 "trtype": "tcp", 00:23:27.459 "traddr": "10.0.0.2", 00:23:27.459 "adrfam": "ipv4", 00:23:27.459 "trsvcid": "4420", 00:23:27.459 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.459 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:27.459 "hdgst": false, 00:23:27.459 "ddgst": false 00:23:27.459 }, 00:23:27.459 "method": "bdev_nvme_attach_controller" 00:23:27.459 }' 00:23:27.459 [2024-05-15 20:15:19.743587] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:23:27.459 [2024-05-15 20:15:19.743679] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid89525 ] 00:23:27.459 [2024-05-15 20:15:19.835815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:27.459 [2024-05-15 20:15:19.945949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.459 [2024-05-15 20:15:19.946085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.459 [2024-05-15 20:15:19.946090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.719 I/O targets: 00:23:27.719 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:27.719 00:23:27.719 00:23:27.719 CUnit - A unit testing framework for C - Version 2.1-3 00:23:27.719 http://cunit.sourceforge.net/ 00:23:27.719 00:23:27.719 00:23:27.719 Suite: bdevio tests on: Nvme1n1 00:23:27.719 Test: blockdev write read block ...passed 00:23:27.719 Test: blockdev write zeroes read block ...passed 00:23:27.719 Test: blockdev write zeroes read no split ...passed 00:23:27.719 Test: blockdev write zeroes read split ...passed 00:23:27.981 Test: blockdev write zeroes read split partial ...passed 00:23:27.981 Test: blockdev reset ...[2024-05-15 20:15:20.221486] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:27.981 [2024-05-15 20:15:20.221556] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18419f0 (9): Bad file descriptor 00:23:27.981 [2024-05-15 20:15:20.233609] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
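Stripped of the rpc_cmd/xtrace plumbing, the provisioning that bdevio.sh performed above reduces to a handful of RPCs against the --no-huge target, after which the bdevio app attaches as an NVMe/TCP initiator using the attach-controller JSON printed above. A condensed sketch, not the literal script ($RPC below stands for the workspace's scripts/rpc.py):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192                  # transport flags exactly as in the trace above
    $RPC bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB malloc bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # bdevio then runs the suite whose results continue below, reading the JSON shown above on fd 62:
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024

The "Nvme1n1: 131072 blocks of 512 bytes (64 MiB)" line below confirms that geometry, and bdevio itself runs with -c 0x7 (cores 0-2) per its EAL parameters, so it never contends with the target's cores 3-6.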
00:23:27.981 passed 00:23:27.981 Test: blockdev write read 8 blocks ...passed 00:23:27.981 Test: blockdev write read size > 128k ...passed 00:23:27.981 Test: blockdev write read invalid size ...passed 00:23:27.981 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:27.981 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:27.981 Test: blockdev write read max offset ...passed 00:23:27.981 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:27.981 Test: blockdev writev readv 8 blocks ...passed 00:23:27.981 Test: blockdev writev readv 30 x 1block ...passed 00:23:28.242 Test: blockdev writev readv block ...passed 00:23:28.242 Test: blockdev writev readv size > 128k ...passed 00:23:28.242 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:28.242 Test: blockdev comparev and writev ...[2024-05-15 20:15:20.501933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:28.242 [2024-05-15 20:15:20.501958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.242 [2024-05-15 20:15:20.501969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:28.242 [2024-05-15 20:15:20.501975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:28.242 [2024-05-15 20:15:20.502492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:28.242 [2024-05-15 20:15:20.502501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:28.242 [2024-05-15 20:15:20.502511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:28.242 [2024-05-15 20:15:20.502516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:28.242 [2024-05-15 20:15:20.503061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:28.242 [2024-05-15 20:15:20.503073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:28.242 [2024-05-15 20:15:20.503082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:28.242 [2024-05-15 20:15:20.503088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:28.242 [2024-05-15 20:15:20.503620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:28.242 [2024-05-15 20:15:20.503629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:28.242 [2024-05-15 20:15:20.503638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:28.242 [2024-05-15 20:15:20.503643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:28.242 passed 00:23:28.242 Test: blockdev nvme passthru rw ...passed 00:23:28.242 Test: blockdev nvme passthru vendor specific ...[2024-05-15 20:15:20.588101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:28.242 [2024-05-15 20:15:20.588112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:28.242 [2024-05-15 20:15:20.588539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:28.242 [2024-05-15 20:15:20.588547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:28.242 [2024-05-15 20:15:20.588950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:28.242 [2024-05-15 20:15:20.588958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:28.242 [2024-05-15 20:15:20.589356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:28.242 [2024-05-15 20:15:20.589364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:28.242 passed 00:23:28.242 Test: blockdev nvme admin passthru ...passed 00:23:28.242 Test: blockdev copy ...passed 00:23:28.242 00:23:28.242 Run Summary: Type Total Ran Passed Failed Inactive 00:23:28.242 suites 1 1 n/a 0 0 00:23:28.242 tests 23 23 23 0 0 00:23:28.242 asserts 152 152 152 0 n/a 00:23:28.242 00:23:28.242 Elapsed time = 1.098 seconds 00:23:28.504 20:15:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:28.504 20:15:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.504 20:15:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:28.504 20:15:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.504 20:15:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:28.504 20:15:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:28.504 20:15:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:28.504 20:15:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:23:28.504 20:15:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:28.504 20:15:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:23:28.504 20:15:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:28.504 20:15:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:28.504 rmmod nvme_tcp 00:23:28.504 rmmod nvme_fabrics 00:23:28.504 rmmod nvme_keyring 00:23:28.504 20:15:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:28.504 20:15:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:23:28.504 20:15:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:23:28.504 20:15:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 89248 ']' 00:23:28.504 20:15:20 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 89248 00:23:28.504 20:15:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 89248 ']' 00:23:28.504 20:15:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 89248 00:23:28.504 20:15:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:23:28.504 20:15:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:28.504 20:15:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89248 00:23:28.765 20:15:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:23:28.765 20:15:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:23:28.765 20:15:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89248' 00:23:28.765 killing process with pid 89248 00:23:28.765 20:15:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 89248 00:23:28.765 [2024-05-15 20:15:21.046936] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:28.765 20:15:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 89248 00:23:29.027 20:15:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:29.027 20:15:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:29.027 20:15:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:29.027 20:15:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:29.027 20:15:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:29.027 20:15:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.027 20:15:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:29.027 20:15:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.576 20:15:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:31.576 00:23:31.576 real 0m13.283s 00:23:31.576 user 0m14.067s 00:23:31.576 sys 0m7.207s 00:23:31.576 20:15:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:31.576 20:15:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:31.576 ************************************ 00:23:31.576 END TEST nvmf_bdevio_no_huge 00:23:31.576 ************************************ 00:23:31.576 20:15:23 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:31.576 20:15:23 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:31.576 20:15:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:31.576 20:15:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:31.576 ************************************ 00:23:31.576 START TEST nvmf_tls 00:23:31.576 ************************************ 00:23:31.576 20:15:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:31.576 * Looking for test storage... 
00:23:31.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:31.576 20:15:23 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:31.576 20:15:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:31.576 20:15:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:31.576 20:15:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:31.576 20:15:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:31.576 20:15:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:31.576 20:15:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:23:31.577 20:15:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:23:39.726 
20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:39.726 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:39.726 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:39.726 Found net devices under 0000:31:00.0: cvl_0_0 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:39.726 Found net devices under 0000:31:00.1: cvl_0_1 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:39.726 20:15:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:39.726 20:15:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:39.727 20:15:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:39.727 20:15:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:39.727 20:15:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:39.727 20:15:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:39.727 20:15:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:39.727 20:15:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:39.727 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:39.727 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.558 ms 00:23:39.727 00:23:39.727 --- 10.0.0.2 ping statistics --- 00:23:39.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.727 rtt min/avg/max/mdev = 0.558/0.558/0.558/0.000 ms 00:23:39.727 20:15:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:39.727 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:39.727 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:23:39.727 00:23:39.727 --- 10.0.0.1 ping statistics --- 00:23:39.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.727 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:23:39.727 20:15:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:39.727 20:15:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:23:39.989 20:15:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:39.989 20:15:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:39.989 20:15:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:39.989 20:15:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:39.989 20:15:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:39.989 20:15:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:39.989 20:15:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:39.989 20:15:32 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:39.989 20:15:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:39.989 20:15:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:39.989 20:15:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.989 20:15:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=94484 00:23:39.989 20:15:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 94484 00:23:39.989 20:15:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:39.989 20:15:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 94484 ']' 00:23:39.989 20:15:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.989 20:15:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:39.989 20:15:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.989 20:15:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:39.989 20:15:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.989 [2024-05-15 20:15:32.332615] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:23:39.989 [2024-05-15 20:15:32.332678] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.989 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.989 [2024-05-15 20:15:32.412369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.989 [2024-05-15 20:15:32.485540] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.989 [2024-05-15 20:15:32.485579] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
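The nvmf_tls run that follows does two things before any connection is attempted: it makes 'ssl' the default socket implementation and round-trips its tls_version/ktls options over RPC, and it builds two TLS pre-shared keys in the NVMe-oF interchange format (the NVMeTLSkey-1:01:...: strings written to /tmp further down). A hedged sketch of both steps follows; the CRC byte order and the meaning of the '01' field are my reading of the format_interchange_psk helper, not something the log itself states:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC sock_set_default_impl -i ssl                          # make the ssl implementation the default
    $RPC sock_impl_set_options -i ssl --tls-version 13         # 13 selects TLS 1.3; tls.sh checks the value round-trips
    $RPC sock_impl_get_options -i ssl | jq -r .tls_version     # expected to print 13
    # PSK interchange format, as format_interchange_psk appears to build it further down:
    python3 - <<'PY'
    import base64, zlib
    psk = b"00112233445566778899aabbccddeeff"            # configured key from the trace below
    crc = zlib.crc32(psk).to_bytes(4, "little")           # assumption: CRC-32 of the key, little-endian, appended for integrity
    print("NVMeTLSkey-1:01:%s:" % base64.b64encode(psk + crc).decode())   # '01' = hash-indicator argument passed to the helper
    PY

If those assumptions hold, the heredoc reproduces the same NVMeTLSkey-1:01:MDAxMTIy... string that tls.sh writes to a mktemp file and chmod 0600's below.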
00:23:39.989 [2024-05-15 20:15:32.485587] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.989 [2024-05-15 20:15:32.485594] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.989 [2024-05-15 20:15:32.485599] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:39.989 [2024-05-15 20:15:32.485618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:40.951 20:15:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:40.951 20:15:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:40.951 20:15:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:40.951 20:15:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:40.951 20:15:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.951 20:15:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:40.951 20:15:33 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:23:40.951 20:15:33 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:40.951 true 00:23:40.951 20:15:33 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:40.951 20:15:33 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:23:41.212 20:15:33 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:23:41.212 20:15:33 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:23:41.212 20:15:33 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:41.473 20:15:33 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:41.473 20:15:33 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:23:41.734 20:15:34 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:23:41.734 20:15:34 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:23:41.734 20:15:34 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:41.995 20:15:34 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:41.995 20:15:34 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:23:41.995 20:15:34 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:23:41.995 20:15:34 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:23:41.995 20:15:34 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:41.995 20:15:34 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:23:42.256 20:15:34 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:23:42.256 20:15:34 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:23:42.256 20:15:34 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:42.517 20:15:34 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:42.517 20:15:34 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:23:42.778 20:15:35 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:23:42.778 20:15:35 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:23:42.778 20:15:35 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:43.040 20:15:35 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:43.040 20:15:35 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:23:43.040 20:15:35 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:23:43.040 20:15:35 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:23:43.040 20:15:35 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:43.040 20:15:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:43.040 20:15:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:43.040 20:15:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:43.040 20:15:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:23:43.040 20:15:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:23:43.040 20:15:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:43.301 20:15:35 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:43.301 20:15:35 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:43.301 20:15:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:43.301 20:15:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:43.301 20:15:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:43.301 20:15:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:23:43.301 20:15:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:23:43.301 20:15:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:43.301 20:15:35 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:43.301 20:15:35 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:23:43.301 20:15:35 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.7NCws4YL50 00:23:43.301 20:15:35 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:43.301 20:15:35 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.eM0TWQqvHe 00:23:43.301 20:15:35 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:43.301 20:15:35 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:43.301 20:15:35 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.7NCws4YL50 00:23:43.301 20:15:35 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.eM0TWQqvHe 00:23:43.301 20:15:35 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:23:43.562 20:15:35 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:43.835 20:15:36 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.7NCws4YL50 00:23:43.835 20:15:36 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.7NCws4YL50 00:23:43.835 20:15:36 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:43.835 [2024-05-15 20:15:36.279703] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.835 20:15:36 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:44.102 20:15:36 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:44.435 [2024-05-15 20:15:36.680691] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:44.435 [2024-05-15 20:15:36.680742] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:44.435 [2024-05-15 20:15:36.680926] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:44.435 20:15:36 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:44.435 malloc0 00:23:44.435 20:15:36 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:44.722 20:15:37 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7NCws4YL50 00:23:44.983 [2024-05-15 20:15:37.276983] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:44.983 20:15:37 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.7NCws4YL50 00:23:44.983 EAL: No free 2048 kB hugepages reported on node 1 00:23:54.984 Initializing NVMe Controllers 00:23:54.984 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:54.985 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:54.985 Initialization complete. Launching workers. 
00:23:54.985 ======================================================== 00:23:54.985 Latency(us) 00:23:54.985 Device Information : IOPS MiB/s Average min max 00:23:54.985 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13526.63 52.84 4732.05 1015.05 5378.71 00:23:54.985 ======================================================== 00:23:54.985 Total : 13526.63 52.84 4732.05 1015.05 5378.71 00:23:54.985 00:23:54.985 20:15:47 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7NCws4YL50 00:23:54.985 20:15:47 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:54.985 20:15:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:54.985 20:15:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:54.985 20:15:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.7NCws4YL50' 00:23:54.985 20:15:47 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:54.985 20:15:47 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=97355 00:23:54.985 20:15:47 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:54.985 20:15:47 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 97355 /var/tmp/bdevperf.sock 00:23:54.985 20:15:47 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:54.985 20:15:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 97355 ']' 00:23:54.985 20:15:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:54.985 20:15:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:54.985 20:15:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:54.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:54.985 20:15:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:54.985 20:15:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.985 [2024-05-15 20:15:47.462865] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:23:54.985 [2024-05-15 20:15:47.462920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97355 ] 00:23:55.246 EAL: No free 2048 kB hugepages reported on node 1 00:23:55.246 [2024-05-15 20:15:47.518062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.246 [2024-05-15 20:15:47.570381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:55.246 20:15:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:55.246 20:15:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:55.246 20:15:47 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7NCws4YL50 00:23:55.507 [2024-05-15 20:15:47.837796] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:55.507 [2024-05-15 20:15:47.837853] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:55.507 TLSTESTn1 00:23:55.507 20:15:47 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:55.768 Running I/O for 10 seconds... 00:24:05.768 00:24:05.768 Latency(us) 00:24:05.768 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.768 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:05.768 Verification LBA range: start 0x0 length 0x2000 00:24:05.768 TLSTESTn1 : 10.06 4266.47 16.67 0.00 0.00 29912.68 5488.64 53957.97 00:24:05.768 =================================================================================================================== 00:24:05.768 Total : 4266.47 16.67 0.00 0.00 29912.68 5488.64 53957.97 00:24:05.768 0 00:24:05.768 20:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:05.768 20:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 97355 00:24:05.768 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 97355 ']' 00:24:05.768 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 97355 00:24:05.768 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:05.768 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:05.768 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 97355 00:24:05.768 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:24:05.768 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:24:05.768 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 97355' 00:24:05.768 killing process with pid 97355 00:24:05.769 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 97355 00:24:05.769 Received shutdown signal, test time was about 10.000000 seconds 00:24:05.769 00:24:05.769 Latency(us) 00:24:05.769 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.769 
=================================================================================================================== 00:24:05.769 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:05.769 [2024-05-15 20:15:58.201869] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:05.769 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 97355 00:24:06.029 20:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eM0TWQqvHe 00:24:06.029 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:06.029 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eM0TWQqvHe 00:24:06.029 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:06.029 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:06.029 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:06.029 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:06.029 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eM0TWQqvHe 00:24:06.029 20:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:06.029 20:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:06.029 20:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:06.029 20:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.eM0TWQqvHe' 00:24:06.029 20:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:06.029 20:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99541 00:24:06.029 20:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:06.029 20:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:06.029 20:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99541 /var/tmp/bdevperf.sock 00:24:06.029 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 99541 ']' 00:24:06.029 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:06.029 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:06.029 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:06.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:06.029 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:06.029 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:06.029 [2024-05-15 20:15:58.363676] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:24:06.029 [2024-05-15 20:15:58.363735] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99541 ] 00:24:06.029 EAL: No free 2048 kB hugepages reported on node 1 00:24:06.029 [2024-05-15 20:15:58.418138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.029 [2024-05-15 20:15:58.470140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:06.290 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:06.290 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:06.290 20:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eM0TWQqvHe 00:24:06.290 [2024-05-15 20:15:58.721608] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:06.290 [2024-05-15 20:15:58.721667] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:06.290 [2024-05-15 20:15:58.727920] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:06.290 [2024-05-15 20:15:58.728777] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b05700 (107): Transport endpoint is not connected 00:24:06.290 [2024-05-15 20:15:58.729774] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b05700 (9): Bad file descriptor 00:24:06.290 [2024-05-15 20:15:58.730775] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:06.290 [2024-05-15 20:15:58.730782] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:06.290 [2024-05-15 20:15:58.730792] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
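Note on this first negative case (target/tls.sh@146): the bdevperf initiator attaches with key_2 (/tmp/tmp.eM0TWQqvHe), a key that was never registered on the target, since only /tmp/tmp.7NCws4YL50 was added for nqn.2016-06.io.spdk:host1. The TLS handshake therefore fails, the initiator logs the errno 107 "Transport endpoint is not connected" errors above, and bdev_nvme_attach_controller returns the -32602 JSON-RPC error dumped immediately below, which is the failure the surrounding NOT wrapper expects (the test then asserts "return 1"). A minimal sketch of the failing call, using the same RPC and arguments as this run with only the rpc.py path shortened and a comment added, and assuming the bdevperf instance started above is listening on /var/tmp/bdevperf.sock:

    # expected to fail: the target has no PSK registered for this host/key combination
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.eM0TWQqvHe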
00:24:06.290 request: 00:24:06.290 { 00:24:06.290 "name": "TLSTEST", 00:24:06.290 "trtype": "tcp", 00:24:06.290 "traddr": "10.0.0.2", 00:24:06.290 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:06.290 "adrfam": "ipv4", 00:24:06.290 "trsvcid": "4420", 00:24:06.290 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:06.290 "psk": "/tmp/tmp.eM0TWQqvHe", 00:24:06.290 "method": "bdev_nvme_attach_controller", 00:24:06.290 "req_id": 1 00:24:06.290 } 00:24:06.290 Got JSON-RPC error response 00:24:06.290 response: 00:24:06.290 { 00:24:06.290 "code": -32602, 00:24:06.290 "message": "Invalid parameters" 00:24:06.290 } 00:24:06.290 20:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 99541 00:24:06.290 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 99541 ']' 00:24:06.290 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 99541 00:24:06.290 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:06.290 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:06.290 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99541 00:24:06.551 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:24:06.551 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:24:06.551 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99541' 00:24:06.551 killing process with pid 99541 00:24:06.551 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 99541 00:24:06.551 Received shutdown signal, test time was about 10.000000 seconds 00:24:06.551 00:24:06.551 Latency(us) 00:24:06.551 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.551 =================================================================================================================== 00:24:06.551 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:06.551 [2024-05-15 20:15:58.801009] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:06.551 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 99541 00:24:06.551 20:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:06.551 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:06.551 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:06.551 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:06.551 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:06.551 20:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.7NCws4YL50 00:24:06.551 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:06.551 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.7NCws4YL50 00:24:06.551 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:06.551 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:06.551 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:06.551 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type 
-t "$arg")" in 00:24:06.551 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.7NCws4YL50 00:24:06.551 20:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:06.551 20:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:06.551 20:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:24:06.551 20:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.7NCws4YL50' 00:24:06.551 20:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:06.551 20:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99706 00:24:06.551 20:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:06.551 20:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99706 /var/tmp/bdevperf.sock 00:24:06.551 20:15:58 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:06.551 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 99706 ']' 00:24:06.551 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:06.551 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:06.551 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:06.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:06.551 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:06.551 20:15:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:06.551 [2024-05-15 20:15:58.952325] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:24:06.551 [2024-05-15 20:15:58.952378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99706 ] 00:24:06.551 EAL: No free 2048 kB hugepages reported on node 1 00:24:06.551 [2024-05-15 20:15:59.006622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.812 [2024-05-15 20:15:59.057992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:06.812 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:06.812 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:06.812 20:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.7NCws4YL50 00:24:06.812 [2024-05-15 20:15:59.313398] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:06.812 [2024-05-15 20:15:59.313452] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:07.072 [2024-05-15 20:15:59.319257] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:07.072 [2024-05-15 20:15:59.319278] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:07.072 [2024-05-15 20:15:59.319302] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:07.072 [2024-05-15 20:15:59.319437] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x86c700 (107): Transport endpoint is not connected 00:24:07.072 [2024-05-15 20:15:59.320408] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x86c700 (9): Bad file descriptor 00:24:07.072 [2024-05-15 20:15:59.321409] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:07.072 [2024-05-15 20:15:59.321417] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:07.072 [2024-05-15 20:15:59.321424] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:07.072 request: 00:24:07.072 { 00:24:07.072 "name": "TLSTEST", 00:24:07.072 "trtype": "tcp", 00:24:07.072 "traddr": "10.0.0.2", 00:24:07.072 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:07.072 "adrfam": "ipv4", 00:24:07.072 "trsvcid": "4420", 00:24:07.072 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:07.072 "psk": "/tmp/tmp.7NCws4YL50", 00:24:07.072 "method": "bdev_nvme_attach_controller", 00:24:07.072 "req_id": 1 00:24:07.072 } 00:24:07.072 Got JSON-RPC error response 00:24:07.072 response: 00:24:07.072 { 00:24:07.072 "code": -32602, 00:24:07.072 "message": "Invalid parameters" 00:24:07.072 } 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 99706 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 99706 ']' 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 99706 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99706 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99706' 00:24:07.072 killing process with pid 99706 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 99706 00:24:07.072 Received shutdown signal, test time was about 10.000000 seconds 00:24:07.072 00:24:07.072 Latency(us) 00:24:07.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.072 =================================================================================================================== 00:24:07.072 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:07.072 [2024-05-15 20:15:59.395021] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 99706 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.7NCws4YL50 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.7NCws4YL50 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type 
-t "$arg")" in 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.7NCws4YL50 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.7NCws4YL50' 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99716 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99716 /var/tmp/bdevperf.sock 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 99716 ']' 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:07.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:07.072 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.072 [2024-05-15 20:15:59.546538] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:24:07.072 [2024-05-15 20:15:59.546591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99716 ] 00:24:07.333 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.333 [2024-05-15 20:15:59.602704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.333 [2024-05-15 20:15:59.652855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:07.333 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:07.333 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:07.333 20:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7NCws4YL50 00:24:07.593 [2024-05-15 20:15:59.916385] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:07.593 [2024-05-15 20:15:59.916453] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:07.593 [2024-05-15 20:15:59.920894] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:07.593 [2024-05-15 20:15:59.920915] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:07.593 [2024-05-15 20:15:59.920938] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:07.593 [2024-05-15 20:15:59.921583] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c98700 (107): Transport endpoint is not connected 00:24:07.593 [2024-05-15 20:15:59.922578] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c98700 (9): Bad file descriptor 00:24:07.593 [2024-05-15 20:15:59.923579] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:07.593 [2024-05-15 20:15:59.923587] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:07.593 [2024-05-15 20:15:59.923594] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:24:07.593 request: 00:24:07.593 { 00:24:07.593 "name": "TLSTEST", 00:24:07.593 "trtype": "tcp", 00:24:07.593 "traddr": "10.0.0.2", 00:24:07.593 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:07.593 "adrfam": "ipv4", 00:24:07.593 "trsvcid": "4420", 00:24:07.593 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:07.593 "psk": "/tmp/tmp.7NCws4YL50", 00:24:07.593 "method": "bdev_nvme_attach_controller", 00:24:07.593 "req_id": 1 00:24:07.593 } 00:24:07.593 Got JSON-RPC error response 00:24:07.593 response: 00:24:07.593 { 00:24:07.593 "code": -32602, 00:24:07.593 "message": "Invalid parameters" 00:24:07.593 } 00:24:07.593 20:15:59 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 99716 00:24:07.593 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 99716 ']' 00:24:07.593 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 99716 00:24:07.593 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:07.593 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:07.593 20:15:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99716 00:24:07.593 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:24:07.593 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:24:07.593 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99716' 00:24:07.593 killing process with pid 99716 00:24:07.593 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 99716 00:24:07.593 Received shutdown signal, test time was about 10.000000 seconds 00:24:07.593 00:24:07.593 Latency(us) 00:24:07.593 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.593 =================================================================================================================== 00:24:07.593 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:07.593 [2024-05-15 20:16:00.011436] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:07.593 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 99716 00:24:07.854 20:16:00 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:07.854 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:07.854 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:07.854 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:07.854 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:07.854 20:16:00 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:07.854 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:07.854 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:07.854 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:07.854 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:07.854 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:07.854 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:07.854 
20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:07.854 20:16:00 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:07.854 20:16:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:07.854 20:16:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:07.854 20:16:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:24:07.854 20:16:00 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:07.854 20:16:00 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99934 00:24:07.854 20:16:00 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:07.854 20:16:00 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:07.854 20:16:00 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99934 /var/tmp/bdevperf.sock 00:24:07.854 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 99934 ']' 00:24:07.854 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:07.854 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:07.854 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:07.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:07.854 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:07.854 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.854 [2024-05-15 20:16:00.167955] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:24:07.854 [2024-05-15 20:16:00.168007] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99934 ] 00:24:07.854 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.854 [2024-05-15 20:16:00.224161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.854 [2024-05-15 20:16:00.276109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:07.854 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:08.115 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:08.115 20:16:00 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:08.115 [2024-05-15 20:16:00.552344] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:08.115 [2024-05-15 20:16:00.554295] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc93080 (9): Bad file descriptor 00:24:08.115 [2024-05-15 20:16:00.555294] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.115 [2024-05-15 20:16:00.555302] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:08.115 [2024-05-15 20:16:00.555309] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:08.115 request: 00:24:08.115 { 00:24:08.115 "name": "TLSTEST", 00:24:08.115 "trtype": "tcp", 00:24:08.115 "traddr": "10.0.0.2", 00:24:08.115 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:08.115 "adrfam": "ipv4", 00:24:08.115 "trsvcid": "4420", 00:24:08.115 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:08.115 "method": "bdev_nvme_attach_controller", 00:24:08.115 "req_id": 1 00:24:08.115 } 00:24:08.115 Got JSON-RPC error response 00:24:08.115 response: 00:24:08.115 { 00:24:08.115 "code": -32602, 00:24:08.115 "message": "Invalid parameters" 00:24:08.115 } 00:24:08.115 20:16:00 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 99934 00:24:08.115 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 99934 ']' 00:24:08.115 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 99934 00:24:08.115 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:08.115 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:08.115 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99934 00:24:08.376 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:24:08.376 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:24:08.376 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99934' 00:24:08.376 killing process with pid 99934 00:24:08.376 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 99934 00:24:08.376 Received shutdown signal, test time was about 10.000000 seconds 00:24:08.376 00:24:08.376 Latency(us) 00:24:08.376 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:08.376 =================================================================================================================== 00:24:08.376 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:08.376 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 99934 00:24:08.376 20:16:00 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:08.376 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:08.376 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:08.376 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:08.376 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:08.376 20:16:00 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 94484 00:24:08.376 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 94484 ']' 00:24:08.376 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 94484 00:24:08.376 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:08.376 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:08.376 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 94484 00:24:08.376 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:08.376 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:08.376 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 94484' 00:24:08.376 killing process with pid 94484 00:24:08.376 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 94484 00:24:08.376 [2024-05-15 
20:16:00.794641] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:08.376 [2024-05-15 20:16:00.794672] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:08.376 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 94484 00:24:08.636 20:16:00 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:24:08.636 20:16:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:24:08.636 20:16:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:24:08.636 20:16:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:08.636 20:16:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:24:08.636 20:16:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:24:08.636 20:16:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:24:08.636 20:16:00 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:08.636 20:16:00 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:24:08.636 20:16:00 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.QHjGbPPhxX 00:24:08.636 20:16:00 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:08.636 20:16:00 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.QHjGbPPhxX 00:24:08.636 20:16:00 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:24:08.636 20:16:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:08.636 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:08.636 20:16:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:08.636 20:16:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=100080 00:24:08.636 20:16:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 100080 00:24:08.636 20:16:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:08.636 20:16:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 100080 ']' 00:24:08.636 20:16:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.636 20:16:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:08.636 20:16:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.636 20:16:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:08.636 20:16:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:08.636 [2024-05-15 20:16:01.048475] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:24:08.636 [2024-05-15 20:16:01.048529] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:08.636 EAL: No free 2048 kB hugepages reported on node 1 00:24:08.636 [2024-05-15 20:16:01.123059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.896 [2024-05-15 20:16:01.188943] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:08.896 [2024-05-15 20:16:01.188981] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:08.896 [2024-05-15 20:16:01.188989] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:08.896 [2024-05-15 20:16:01.188999] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:08.896 [2024-05-15 20:16:01.189004] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:08.896 [2024-05-15 20:16:01.189030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.896 20:16:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:08.896 20:16:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:08.896 20:16:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:08.896 20:16:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:08.896 20:16:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:08.896 20:16:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:08.896 20:16:01 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.QHjGbPPhxX 00:24:08.896 20:16:01 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.QHjGbPPhxX 00:24:08.897 20:16:01 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:09.156 [2024-05-15 20:16:01.506175] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:09.156 20:16:01 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:09.417 20:16:01 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:09.417 [2024-05-15 20:16:01.895134] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:09.417 [2024-05-15 20:16:01.895183] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:09.417 [2024-05-15 20:16:01.895369] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:09.417 20:16:01 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:09.677 malloc0 00:24:09.677 20:16:02 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
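The target-side TLS setup for the longer interchange key repeats the earlier pattern: create the TCP transport, the subsystem, a TLS-enabled listener (-k) and a malloc namespace, then register the host together with its PSK file (the nvmf_subsystem_add_host call follows immediately below). Condensed into a minimal sketch using the same rpc.py calls as this run, with the full /var/jenkins workspace path shortened to scripts/rpc.py and assuming the nvmf_tgt started above (pid 100080, inside the cvl_0_0_ns_spdk namespace) is serving the default RPC socket; the key file is the mktemp'd, chmod-0600 /tmp/tmp.QHjGbPPhxX created earlier:

    # transport, subsystem, TLS listener and backing namespace
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # register the host with its PSK file (kept at mode 0600)
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QHjGbPPhxX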
00:24:09.936 20:16:02 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QHjGbPPhxX 00:24:10.195 [2024-05-15 20:16:02.483338] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:10.195 20:16:02 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QHjGbPPhxX 00:24:10.195 20:16:02 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:10.195 20:16:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:10.195 20:16:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:10.195 20:16:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.QHjGbPPhxX' 00:24:10.195 20:16:02 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:10.195 20:16:02 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:10.196 20:16:02 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100431 00:24:10.196 20:16:02 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:10.196 20:16:02 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100431 /var/tmp/bdevperf.sock 00:24:10.196 20:16:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 100431 ']' 00:24:10.196 20:16:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:10.196 20:16:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:10.196 20:16:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:10.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:10.196 20:16:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:10.196 20:16:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:10.196 [2024-05-15 20:16:02.526251] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:24:10.196 [2024-05-15 20:16:02.526297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100431 ] 00:24:10.196 EAL: No free 2048 kB hugepages reported on node 1 00:24:10.196 [2024-05-15 20:16:02.581908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.196 [2024-05-15 20:16:02.633657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:10.455 20:16:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:10.455 20:16:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:10.455 20:16:02 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QHjGbPPhxX 00:24:10.455 [2024-05-15 20:16:02.893169] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:10.455 [2024-05-15 20:16:02.893235] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:10.714 TLSTESTn1 00:24:10.715 20:16:02 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:10.715 Running I/O for 10 seconds... 00:24:20.711 00:24:20.711 Latency(us) 00:24:20.711 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.711 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:20.711 Verification LBA range: start 0x0 length 0x2000 00:24:20.711 TLSTESTn1 : 10.03 3325.54 12.99 0.00 0.00 38430.00 4724.05 97867.09 00:24:20.711 =================================================================================================================== 00:24:20.711 Total : 3325.54 12.99 0.00 0.00 38430.00 4724.05 97867.09 00:24:20.711 0 00:24:20.711 20:16:13 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:20.711 20:16:13 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 100431 00:24:20.711 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 100431 ']' 00:24:20.711 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 100431 00:24:20.711 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:20.711 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:20.711 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100431 00:24:20.971 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:24:20.971 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:24:20.971 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100431' 00:24:20.971 killing process with pid 100431 00:24:20.971 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 100431 00:24:20.971 Received shutdown signal, test time was about 10.000000 seconds 00:24:20.971 00:24:20.971 Latency(us) 00:24:20.971 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.971 
=================================================================================================================== 00:24:20.971 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:20.971 [2024-05-15 20:16:13.214734] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:20.971 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 100431 00:24:20.971 20:16:13 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.QHjGbPPhxX 00:24:20.971 20:16:13 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QHjGbPPhxX 00:24:20.971 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:20.971 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QHjGbPPhxX 00:24:20.971 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:20.971 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:20.971 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:20.971 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:20.971 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QHjGbPPhxX 00:24:20.971 20:16:13 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:20.971 20:16:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:20.971 20:16:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:20.971 20:16:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.QHjGbPPhxX' 00:24:20.971 20:16:13 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:20.971 20:16:13 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=102446 00:24:20.971 20:16:13 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:20.971 20:16:13 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:20.971 20:16:13 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 102446 /var/tmp/bdevperf.sock 00:24:20.971 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 102446 ']' 00:24:20.971 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:20.971 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:20.971 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:20.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:20.971 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:20.971 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:20.971 [2024-05-15 20:16:13.381143] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
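The chmod 0666 above sets up the negative case at target/tls.sh@171: run_bdevperf is wrapped in NOT and is expected to fail, because SPDK refuses to load a PSK file whose permissions are looser than owner-only (the "Incorrect permissions for PSK file" / "Operation not permitted" errors that follow). In short, for the key file used throughout this run:

  chmod 0600 /tmp/tmp.QHjGbPPhxX    # owner-only: accepted by nvmf_subsystem_add_host and bdev_nvme_attach_controller
  chmod 0666 /tmp/tmp.QHjGbPPhxX    # world-readable: both calls are rejected, as seen below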
00:24:20.971 [2024-05-15 20:16:13.381196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102446 ] 00:24:20.971 EAL: No free 2048 kB hugepages reported on node 1 00:24:20.971 [2024-05-15 20:16:13.437201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.234 [2024-05-15 20:16:13.487786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:21.234 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:21.234 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:21.234 20:16:13 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QHjGbPPhxX 00:24:21.495 [2024-05-15 20:16:13.755386] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:21.495 [2024-05-15 20:16:13.755436] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:24:21.495 [2024-05-15 20:16:13.755441] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.QHjGbPPhxX 00:24:21.495 request: 00:24:21.495 { 00:24:21.495 "name": "TLSTEST", 00:24:21.495 "trtype": "tcp", 00:24:21.495 "traddr": "10.0.0.2", 00:24:21.495 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:21.495 "adrfam": "ipv4", 00:24:21.495 "trsvcid": "4420", 00:24:21.495 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.495 "psk": "/tmp/tmp.QHjGbPPhxX", 00:24:21.495 "method": "bdev_nvme_attach_controller", 00:24:21.495 "req_id": 1 00:24:21.495 } 00:24:21.495 Got JSON-RPC error response 00:24:21.495 response: 00:24:21.495 { 00:24:21.495 "code": -1, 00:24:21.495 "message": "Operation not permitted" 00:24:21.495 } 00:24:21.495 20:16:13 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 102446 00:24:21.495 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 102446 ']' 00:24:21.495 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 102446 00:24:21.495 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:21.495 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:21.495 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 102446 00:24:21.495 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:24:21.495 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:24:21.495 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 102446' 00:24:21.495 killing process with pid 102446 00:24:21.495 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 102446 00:24:21.495 Received shutdown signal, test time was about 10.000000 seconds 00:24:21.495 00:24:21.495 Latency(us) 00:24:21.495 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.495 =================================================================================================================== 00:24:21.495 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:21.495 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # 
wait 102446 00:24:21.495 20:16:13 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:21.495 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:21.495 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:21.495 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:21.495 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:21.495 20:16:13 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 100080 00:24:21.495 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 100080 ']' 00:24:21.495 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 100080 00:24:21.495 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:21.495 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:21.495 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100080 00:24:21.495 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:21.495 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:21.495 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100080' 00:24:21.495 killing process with pid 100080 00:24:21.495 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 100080 00:24:21.495 [2024-05-15 20:16:13.995511] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:21.495 [2024-05-15 20:16:13.995550] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:21.495 20:16:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 100080 00:24:21.756 20:16:14 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:24:21.756 20:16:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:21.756 20:16:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:21.756 20:16:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:21.756 20:16:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=102683 00:24:21.756 20:16:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 102683 00:24:21.756 20:16:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:21.756 20:16:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 102683 ']' 00:24:21.756 20:16:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.757 20:16:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:21.757 20:16:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:21.757 20:16:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:21.757 20:16:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:21.757 [2024-05-15 20:16:14.192328] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
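With the negative case confirmed, the nvmf target from the earlier tests (pid 100080) is killed and nvmfappstart (target/tls.sh@175) brings up a fresh one inside the test's network namespace. Roughly, and simplifying the pid bookkeeping done by nvmf/common.sh:

  # relaunch the target inside the namespace used by this job, then wait for its RPC socket
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!                       # the real helper records and traps this pid (102683 in this run)
  waitforlisten "$nvmfpid"         # blocks until /var/tmp/spdk.sock answers JSON-RPC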
00:24:21.757 [2024-05-15 20:16:14.192384] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:21.757 EAL: No free 2048 kB hugepages reported on node 1 00:24:22.018 [2024-05-15 20:16:14.264429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.018 [2024-05-15 20:16:14.329849] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:22.018 [2024-05-15 20:16:14.329887] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:22.018 [2024-05-15 20:16:14.329895] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:22.018 [2024-05-15 20:16:14.329901] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:22.018 [2024-05-15 20:16:14.329907] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:22.018 [2024-05-15 20:16:14.329929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:22.018 20:16:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:22.018 20:16:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:22.018 20:16:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:22.018 20:16:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:22.018 20:16:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:22.018 20:16:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:22.018 20:16:14 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.QHjGbPPhxX 00:24:22.018 20:16:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:22.018 20:16:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.QHjGbPPhxX 00:24:22.018 20:16:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:24:22.018 20:16:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:22.018 20:16:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:24:22.018 20:16:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:22.018 20:16:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.QHjGbPPhxX 00:24:22.018 20:16:14 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.QHjGbPPhxX 00:24:22.018 20:16:14 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:22.279 [2024-05-15 20:16:14.603045] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:22.279 20:16:14 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:22.540 20:16:14 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:22.540 [2024-05-15 20:16:14.979969] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:22.540 [2024-05-15 20:16:14.980015] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:22.540 [2024-05-15 20:16:14.980197] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:22.540 20:16:14 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:22.800 malloc0 00:24:22.800 20:16:15 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:23.061 20:16:15 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QHjGbPPhxX 00:24:23.061 [2024-05-15 20:16:15.543981] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:24:23.061 [2024-05-15 20:16:15.544005] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:24:23.061 [2024-05-15 20:16:15.544031] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:23.061 request: 00:24:23.061 { 00:24:23.061 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.061 "host": "nqn.2016-06.io.spdk:host1", 00:24:23.061 "psk": "/tmp/tmp.QHjGbPPhxX", 00:24:23.061 "method": "nvmf_subsystem_add_host", 00:24:23.061 "req_id": 1 00:24:23.061 } 00:24:23.061 Got JSON-RPC error response 00:24:23.061 response: 00:24:23.061 { 00:24:23.061 "code": -32603, 00:24:23.061 "message": "Internal error" 00:24:23.061 } 00:24:23.061 20:16:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:23.061 20:16:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:23.061 20:16:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:23.061 20:16:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:23.061 20:16:15 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 102683 00:24:23.061 20:16:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 102683 ']' 00:24:23.061 20:16:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 102683 00:24:23.061 20:16:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:23.322 20:16:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:23.322 20:16:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 102683 00:24:23.322 20:16:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:23.322 20:16:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:23.322 20:16:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 102683' 00:24:23.322 killing process with pid 102683 00:24:23.322 20:16:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 102683 00:24:23.322 [2024-05-15 20:16:15.617670] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:23.322 20:16:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 102683 00:24:23.322 20:16:15 nvmf_tcp.nvmf_tls -- 
target/tls.sh@181 -- # chmod 0600 /tmp/tmp.QHjGbPPhxX 00:24:23.322 20:16:15 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:24:23.322 20:16:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:23.322 20:16:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:23.322 20:16:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:23.322 20:16:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=102987 00:24:23.322 20:16:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 102987 00:24:23.322 20:16:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:23.322 20:16:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 102987 ']' 00:24:23.322 20:16:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.322 20:16:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:23.322 20:16:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:23.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.322 20:16:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:23.322 20:16:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:23.322 [2024-05-15 20:16:15.814287] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:24:23.322 [2024-05-15 20:16:15.814353] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:23.583 EAL: No free 2048 kB hugepages reported on node 1 00:24:23.583 [2024-05-15 20:16:15.888844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.583 [2024-05-15 20:16:15.953283] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:23.583 [2024-05-15 20:16:15.953325] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:23.583 [2024-05-15 20:16:15.953333] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:23.583 [2024-05-15 20:16:15.953339] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:23.583 [2024-05-15 20:16:15.953345] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
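Two different failures came out of the same loose permissions: the host-side bdev_nvme_attach_controller returned -1 "Operation not permitted" (bdev_nvme_load_psk), and the target-side nvmf_subsystem_add_host returned -32603 "Internal error" (tcp_load_psk). With the key restored to 0600 at target/tls.sh@181 and a fresh target (pid 102987) started, the same setup_nvmf_tgt sequence is repeated below and now goes through with only the PSK-path deprecation warning. One way to double-check the precondition outside the test (not part of this run):

  stat -c '%a %U:%G' /tmp/tmp.QHjGbPPhxX    # expected to report 600 and the test user after the chmod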
00:24:23.583 [2024-05-15 20:16:15.953365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:23.583 20:16:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:23.583 20:16:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:23.583 20:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:23.583 20:16:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:23.583 20:16:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:23.583 20:16:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:23.583 20:16:16 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.QHjGbPPhxX 00:24:23.583 20:16:16 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.QHjGbPPhxX 00:24:23.583 20:16:16 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:23.843 [2024-05-15 20:16:16.210465] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.843 20:16:16 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:24.105 20:16:16 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:24.105 [2024-05-15 20:16:16.599426] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:24.105 [2024-05-15 20:16:16.599474] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:24.105 [2024-05-15 20:16:16.599645] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:24.367 20:16:16 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:24.367 malloc0 00:24:24.367 20:16:16 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:24.629 20:16:17 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QHjGbPPhxX 00:24:24.891 [2024-05-15 20:16:17.187527] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:24.891 20:16:17 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:24.891 20:16:17 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=103214 00:24:24.891 20:16:17 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:24.891 20:16:17 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 103214 /var/tmp/bdevperf.sock 00:24:24.891 20:16:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 103214 ']' 00:24:24.891 20:16:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:24:24.891 20:16:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:24.891 20:16:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:24.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:24.891 20:16:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:24.891 20:16:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:24.891 [2024-05-15 20:16:17.229942] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:24:24.891 [2024-05-15 20:16:17.229991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103214 ] 00:24:24.891 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.891 [2024-05-15 20:16:17.285911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.891 [2024-05-15 20:16:17.337626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:25.152 20:16:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:25.152 20:16:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:25.152 20:16:17 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QHjGbPPhxX 00:24:25.152 [2024-05-15 20:16:17.589130] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:25.152 [2024-05-15 20:16:17.589198] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:25.413 TLSTESTn1 00:24:25.413 20:16:17 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:25.674 20:16:17 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:24:25.674 "subsystems": [ 00:24:25.674 { 00:24:25.674 "subsystem": "keyring", 00:24:25.674 "config": [] 00:24:25.674 }, 00:24:25.674 { 00:24:25.674 "subsystem": "iobuf", 00:24:25.674 "config": [ 00:24:25.674 { 00:24:25.674 "method": "iobuf_set_options", 00:24:25.674 "params": { 00:24:25.674 "small_pool_count": 8192, 00:24:25.674 "large_pool_count": 1024, 00:24:25.674 "small_bufsize": 8192, 00:24:25.674 "large_bufsize": 135168 00:24:25.674 } 00:24:25.674 } 00:24:25.674 ] 00:24:25.674 }, 00:24:25.674 { 00:24:25.674 "subsystem": "sock", 00:24:25.674 "config": [ 00:24:25.674 { 00:24:25.674 "method": "sock_impl_set_options", 00:24:25.674 "params": { 00:24:25.674 "impl_name": "posix", 00:24:25.674 "recv_buf_size": 2097152, 00:24:25.674 "send_buf_size": 2097152, 00:24:25.674 "enable_recv_pipe": true, 00:24:25.674 "enable_quickack": false, 00:24:25.674 "enable_placement_id": 0, 00:24:25.674 "enable_zerocopy_send_server": true, 00:24:25.674 "enable_zerocopy_send_client": false, 00:24:25.674 "zerocopy_threshold": 0, 00:24:25.674 "tls_version": 0, 00:24:25.674 "enable_ktls": false 00:24:25.674 } 00:24:25.674 }, 00:24:25.674 { 00:24:25.674 "method": "sock_impl_set_options", 00:24:25.674 "params": { 00:24:25.674 
"impl_name": "ssl", 00:24:25.674 "recv_buf_size": 4096, 00:24:25.674 "send_buf_size": 4096, 00:24:25.674 "enable_recv_pipe": true, 00:24:25.674 "enable_quickack": false, 00:24:25.674 "enable_placement_id": 0, 00:24:25.674 "enable_zerocopy_send_server": true, 00:24:25.674 "enable_zerocopy_send_client": false, 00:24:25.674 "zerocopy_threshold": 0, 00:24:25.674 "tls_version": 0, 00:24:25.674 "enable_ktls": false 00:24:25.674 } 00:24:25.674 } 00:24:25.674 ] 00:24:25.674 }, 00:24:25.674 { 00:24:25.674 "subsystem": "vmd", 00:24:25.674 "config": [] 00:24:25.674 }, 00:24:25.674 { 00:24:25.674 "subsystem": "accel", 00:24:25.674 "config": [ 00:24:25.674 { 00:24:25.674 "method": "accel_set_options", 00:24:25.674 "params": { 00:24:25.674 "small_cache_size": 128, 00:24:25.674 "large_cache_size": 16, 00:24:25.674 "task_count": 2048, 00:24:25.674 "sequence_count": 2048, 00:24:25.674 "buf_count": 2048 00:24:25.674 } 00:24:25.674 } 00:24:25.674 ] 00:24:25.674 }, 00:24:25.674 { 00:24:25.674 "subsystem": "bdev", 00:24:25.674 "config": [ 00:24:25.674 { 00:24:25.674 "method": "bdev_set_options", 00:24:25.674 "params": { 00:24:25.674 "bdev_io_pool_size": 65535, 00:24:25.674 "bdev_io_cache_size": 256, 00:24:25.674 "bdev_auto_examine": true, 00:24:25.674 "iobuf_small_cache_size": 128, 00:24:25.674 "iobuf_large_cache_size": 16 00:24:25.674 } 00:24:25.675 }, 00:24:25.675 { 00:24:25.675 "method": "bdev_raid_set_options", 00:24:25.675 "params": { 00:24:25.675 "process_window_size_kb": 1024 00:24:25.675 } 00:24:25.675 }, 00:24:25.675 { 00:24:25.675 "method": "bdev_iscsi_set_options", 00:24:25.675 "params": { 00:24:25.675 "timeout_sec": 30 00:24:25.675 } 00:24:25.675 }, 00:24:25.675 { 00:24:25.675 "method": "bdev_nvme_set_options", 00:24:25.675 "params": { 00:24:25.675 "action_on_timeout": "none", 00:24:25.675 "timeout_us": 0, 00:24:25.675 "timeout_admin_us": 0, 00:24:25.675 "keep_alive_timeout_ms": 10000, 00:24:25.675 "arbitration_burst": 0, 00:24:25.675 "low_priority_weight": 0, 00:24:25.675 "medium_priority_weight": 0, 00:24:25.675 "high_priority_weight": 0, 00:24:25.675 "nvme_adminq_poll_period_us": 10000, 00:24:25.675 "nvme_ioq_poll_period_us": 0, 00:24:25.675 "io_queue_requests": 0, 00:24:25.675 "delay_cmd_submit": true, 00:24:25.675 "transport_retry_count": 4, 00:24:25.675 "bdev_retry_count": 3, 00:24:25.675 "transport_ack_timeout": 0, 00:24:25.675 "ctrlr_loss_timeout_sec": 0, 00:24:25.675 "reconnect_delay_sec": 0, 00:24:25.675 "fast_io_fail_timeout_sec": 0, 00:24:25.675 "disable_auto_failback": false, 00:24:25.675 "generate_uuids": false, 00:24:25.675 "transport_tos": 0, 00:24:25.675 "nvme_error_stat": false, 00:24:25.675 "rdma_srq_size": 0, 00:24:25.675 "io_path_stat": false, 00:24:25.675 "allow_accel_sequence": false, 00:24:25.675 "rdma_max_cq_size": 0, 00:24:25.675 "rdma_cm_event_timeout_ms": 0, 00:24:25.675 "dhchap_digests": [ 00:24:25.675 "sha256", 00:24:25.675 "sha384", 00:24:25.675 "sha512" 00:24:25.675 ], 00:24:25.675 "dhchap_dhgroups": [ 00:24:25.675 "null", 00:24:25.675 "ffdhe2048", 00:24:25.675 "ffdhe3072", 00:24:25.675 "ffdhe4096", 00:24:25.675 "ffdhe6144", 00:24:25.675 "ffdhe8192" 00:24:25.675 ] 00:24:25.675 } 00:24:25.675 }, 00:24:25.675 { 00:24:25.675 "method": "bdev_nvme_set_hotplug", 00:24:25.675 "params": { 00:24:25.675 "period_us": 100000, 00:24:25.675 "enable": false 00:24:25.675 } 00:24:25.675 }, 00:24:25.675 { 00:24:25.675 "method": "bdev_malloc_create", 00:24:25.675 "params": { 00:24:25.675 "name": "malloc0", 00:24:25.675 "num_blocks": 8192, 00:24:25.675 "block_size": 4096, 00:24:25.675 
"physical_block_size": 4096, 00:24:25.675 "uuid": "94e43645-4af5-4b5f-bd77-7487eb345fa2", 00:24:25.675 "optimal_io_boundary": 0 00:24:25.675 } 00:24:25.675 }, 00:24:25.675 { 00:24:25.675 "method": "bdev_wait_for_examine" 00:24:25.675 } 00:24:25.675 ] 00:24:25.675 }, 00:24:25.675 { 00:24:25.675 "subsystem": "nbd", 00:24:25.675 "config": [] 00:24:25.675 }, 00:24:25.675 { 00:24:25.675 "subsystem": "scheduler", 00:24:25.675 "config": [ 00:24:25.675 { 00:24:25.675 "method": "framework_set_scheduler", 00:24:25.675 "params": { 00:24:25.675 "name": "static" 00:24:25.675 } 00:24:25.675 } 00:24:25.675 ] 00:24:25.675 }, 00:24:25.675 { 00:24:25.675 "subsystem": "nvmf", 00:24:25.675 "config": [ 00:24:25.675 { 00:24:25.675 "method": "nvmf_set_config", 00:24:25.675 "params": { 00:24:25.675 "discovery_filter": "match_any", 00:24:25.675 "admin_cmd_passthru": { 00:24:25.675 "identify_ctrlr": false 00:24:25.675 } 00:24:25.675 } 00:24:25.675 }, 00:24:25.675 { 00:24:25.675 "method": "nvmf_set_max_subsystems", 00:24:25.675 "params": { 00:24:25.675 "max_subsystems": 1024 00:24:25.675 } 00:24:25.675 }, 00:24:25.675 { 00:24:25.675 "method": "nvmf_set_crdt", 00:24:25.675 "params": { 00:24:25.675 "crdt1": 0, 00:24:25.675 "crdt2": 0, 00:24:25.675 "crdt3": 0 00:24:25.675 } 00:24:25.675 }, 00:24:25.675 { 00:24:25.675 "method": "nvmf_create_transport", 00:24:25.675 "params": { 00:24:25.675 "trtype": "TCP", 00:24:25.675 "max_queue_depth": 128, 00:24:25.675 "max_io_qpairs_per_ctrlr": 127, 00:24:25.675 "in_capsule_data_size": 4096, 00:24:25.675 "max_io_size": 131072, 00:24:25.675 "io_unit_size": 131072, 00:24:25.675 "max_aq_depth": 128, 00:24:25.675 "num_shared_buffers": 511, 00:24:25.675 "buf_cache_size": 4294967295, 00:24:25.675 "dif_insert_or_strip": false, 00:24:25.675 "zcopy": false, 00:24:25.675 "c2h_success": false, 00:24:25.675 "sock_priority": 0, 00:24:25.675 "abort_timeout_sec": 1, 00:24:25.675 "ack_timeout": 0, 00:24:25.675 "data_wr_pool_size": 0 00:24:25.675 } 00:24:25.675 }, 00:24:25.675 { 00:24:25.675 "method": "nvmf_create_subsystem", 00:24:25.675 "params": { 00:24:25.675 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.675 "allow_any_host": false, 00:24:25.675 "serial_number": "SPDK00000000000001", 00:24:25.675 "model_number": "SPDK bdev Controller", 00:24:25.675 "max_namespaces": 10, 00:24:25.675 "min_cntlid": 1, 00:24:25.675 "max_cntlid": 65519, 00:24:25.675 "ana_reporting": false 00:24:25.675 } 00:24:25.675 }, 00:24:25.675 { 00:24:25.675 "method": "nvmf_subsystem_add_host", 00:24:25.675 "params": { 00:24:25.675 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.675 "host": "nqn.2016-06.io.spdk:host1", 00:24:25.675 "psk": "/tmp/tmp.QHjGbPPhxX" 00:24:25.675 } 00:24:25.675 }, 00:24:25.675 { 00:24:25.675 "method": "nvmf_subsystem_add_ns", 00:24:25.675 "params": { 00:24:25.675 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.675 "namespace": { 00:24:25.675 "nsid": 1, 00:24:25.675 "bdev_name": "malloc0", 00:24:25.675 "nguid": "94E436454AF54B5FBD777487EB345FA2", 00:24:25.675 "uuid": "94e43645-4af5-4b5f-bd77-7487eb345fa2", 00:24:25.675 "no_auto_visible": false 00:24:25.675 } 00:24:25.675 } 00:24:25.675 }, 00:24:25.675 { 00:24:25.675 "method": "nvmf_subsystem_add_listener", 00:24:25.675 "params": { 00:24:25.675 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.675 "listen_address": { 00:24:25.675 "trtype": "TCP", 00:24:25.675 "adrfam": "IPv4", 00:24:25.675 "traddr": "10.0.0.2", 00:24:25.675 "trsvcid": "4420" 00:24:25.675 }, 00:24:25.675 "secure_channel": true 00:24:25.675 } 00:24:25.675 } 00:24:25.675 ] 00:24:25.675 } 
00:24:25.675 ] 00:24:25.675 }' 00:24:25.675 20:16:17 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:25.936 20:16:18 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:24:25.936 "subsystems": [ 00:24:25.936 { 00:24:25.936 "subsystem": "keyring", 00:24:25.936 "config": [] 00:24:25.936 }, 00:24:25.936 { 00:24:25.936 "subsystem": "iobuf", 00:24:25.936 "config": [ 00:24:25.936 { 00:24:25.936 "method": "iobuf_set_options", 00:24:25.936 "params": { 00:24:25.936 "small_pool_count": 8192, 00:24:25.936 "large_pool_count": 1024, 00:24:25.936 "small_bufsize": 8192, 00:24:25.937 "large_bufsize": 135168 00:24:25.937 } 00:24:25.937 } 00:24:25.937 ] 00:24:25.937 }, 00:24:25.937 { 00:24:25.937 "subsystem": "sock", 00:24:25.937 "config": [ 00:24:25.937 { 00:24:25.937 "method": "sock_impl_set_options", 00:24:25.937 "params": { 00:24:25.937 "impl_name": "posix", 00:24:25.937 "recv_buf_size": 2097152, 00:24:25.937 "send_buf_size": 2097152, 00:24:25.937 "enable_recv_pipe": true, 00:24:25.937 "enable_quickack": false, 00:24:25.937 "enable_placement_id": 0, 00:24:25.937 "enable_zerocopy_send_server": true, 00:24:25.937 "enable_zerocopy_send_client": false, 00:24:25.937 "zerocopy_threshold": 0, 00:24:25.937 "tls_version": 0, 00:24:25.937 "enable_ktls": false 00:24:25.937 } 00:24:25.937 }, 00:24:25.937 { 00:24:25.937 "method": "sock_impl_set_options", 00:24:25.937 "params": { 00:24:25.937 "impl_name": "ssl", 00:24:25.937 "recv_buf_size": 4096, 00:24:25.937 "send_buf_size": 4096, 00:24:25.937 "enable_recv_pipe": true, 00:24:25.937 "enable_quickack": false, 00:24:25.937 "enable_placement_id": 0, 00:24:25.937 "enable_zerocopy_send_server": true, 00:24:25.937 "enable_zerocopy_send_client": false, 00:24:25.937 "zerocopy_threshold": 0, 00:24:25.937 "tls_version": 0, 00:24:25.937 "enable_ktls": false 00:24:25.937 } 00:24:25.937 } 00:24:25.937 ] 00:24:25.937 }, 00:24:25.937 { 00:24:25.937 "subsystem": "vmd", 00:24:25.937 "config": [] 00:24:25.937 }, 00:24:25.937 { 00:24:25.937 "subsystem": "accel", 00:24:25.937 "config": [ 00:24:25.937 { 00:24:25.937 "method": "accel_set_options", 00:24:25.937 "params": { 00:24:25.937 "small_cache_size": 128, 00:24:25.937 "large_cache_size": 16, 00:24:25.937 "task_count": 2048, 00:24:25.937 "sequence_count": 2048, 00:24:25.937 "buf_count": 2048 00:24:25.937 } 00:24:25.937 } 00:24:25.937 ] 00:24:25.937 }, 00:24:25.937 { 00:24:25.937 "subsystem": "bdev", 00:24:25.937 "config": [ 00:24:25.937 { 00:24:25.937 "method": "bdev_set_options", 00:24:25.937 "params": { 00:24:25.937 "bdev_io_pool_size": 65535, 00:24:25.937 "bdev_io_cache_size": 256, 00:24:25.937 "bdev_auto_examine": true, 00:24:25.937 "iobuf_small_cache_size": 128, 00:24:25.937 "iobuf_large_cache_size": 16 00:24:25.937 } 00:24:25.937 }, 00:24:25.937 { 00:24:25.937 "method": "bdev_raid_set_options", 00:24:25.937 "params": { 00:24:25.937 "process_window_size_kb": 1024 00:24:25.937 } 00:24:25.937 }, 00:24:25.937 { 00:24:25.937 "method": "bdev_iscsi_set_options", 00:24:25.937 "params": { 00:24:25.937 "timeout_sec": 30 00:24:25.937 } 00:24:25.937 }, 00:24:25.937 { 00:24:25.937 "method": "bdev_nvme_set_options", 00:24:25.937 "params": { 00:24:25.937 "action_on_timeout": "none", 00:24:25.937 "timeout_us": 0, 00:24:25.937 "timeout_admin_us": 0, 00:24:25.937 "keep_alive_timeout_ms": 10000, 00:24:25.937 "arbitration_burst": 0, 00:24:25.937 "low_priority_weight": 0, 00:24:25.937 "medium_priority_weight": 0, 00:24:25.937 
"high_priority_weight": 0, 00:24:25.937 "nvme_adminq_poll_period_us": 10000, 00:24:25.937 "nvme_ioq_poll_period_us": 0, 00:24:25.937 "io_queue_requests": 512, 00:24:25.937 "delay_cmd_submit": true, 00:24:25.937 "transport_retry_count": 4, 00:24:25.937 "bdev_retry_count": 3, 00:24:25.937 "transport_ack_timeout": 0, 00:24:25.937 "ctrlr_loss_timeout_sec": 0, 00:24:25.937 "reconnect_delay_sec": 0, 00:24:25.937 "fast_io_fail_timeout_sec": 0, 00:24:25.937 "disable_auto_failback": false, 00:24:25.937 "generate_uuids": false, 00:24:25.937 "transport_tos": 0, 00:24:25.937 "nvme_error_stat": false, 00:24:25.937 "rdma_srq_size": 0, 00:24:25.937 "io_path_stat": false, 00:24:25.937 "allow_accel_sequence": false, 00:24:25.937 "rdma_max_cq_size": 0, 00:24:25.937 "rdma_cm_event_timeout_ms": 0, 00:24:25.937 "dhchap_digests": [ 00:24:25.937 "sha256", 00:24:25.937 "sha384", 00:24:25.937 "sha512" 00:24:25.937 ], 00:24:25.937 "dhchap_dhgroups": [ 00:24:25.937 "null", 00:24:25.937 "ffdhe2048", 00:24:25.937 "ffdhe3072", 00:24:25.937 "ffdhe4096", 00:24:25.937 "ffdhe6144", 00:24:25.937 "ffdhe8192" 00:24:25.937 ] 00:24:25.937 } 00:24:25.937 }, 00:24:25.937 { 00:24:25.937 "method": "bdev_nvme_attach_controller", 00:24:25.937 "params": { 00:24:25.937 "name": "TLSTEST", 00:24:25.937 "trtype": "TCP", 00:24:25.937 "adrfam": "IPv4", 00:24:25.937 "traddr": "10.0.0.2", 00:24:25.937 "trsvcid": "4420", 00:24:25.937 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.937 "prchk_reftag": false, 00:24:25.937 "prchk_guard": false, 00:24:25.937 "ctrlr_loss_timeout_sec": 0, 00:24:25.937 "reconnect_delay_sec": 0, 00:24:25.937 "fast_io_fail_timeout_sec": 0, 00:24:25.937 "psk": "/tmp/tmp.QHjGbPPhxX", 00:24:25.937 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:25.937 "hdgst": false, 00:24:25.937 "ddgst": false 00:24:25.937 } 00:24:25.937 }, 00:24:25.937 { 00:24:25.937 "method": "bdev_nvme_set_hotplug", 00:24:25.937 "params": { 00:24:25.937 "period_us": 100000, 00:24:25.937 "enable": false 00:24:25.937 } 00:24:25.937 }, 00:24:25.937 { 00:24:25.937 "method": "bdev_wait_for_examine" 00:24:25.937 } 00:24:25.937 ] 00:24:25.937 }, 00:24:25.937 { 00:24:25.937 "subsystem": "nbd", 00:24:25.937 "config": [] 00:24:25.937 } 00:24:25.937 ] 00:24:25.937 }' 00:24:25.937 20:16:18 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 103214 00:24:25.937 20:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 103214 ']' 00:24:25.937 20:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 103214 00:24:25.937 20:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:25.937 20:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:25.937 20:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 103214 00:24:25.937 20:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:24:25.937 20:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:24:25.937 20:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 103214' 00:24:25.937 killing process with pid 103214 00:24:25.937 20:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 103214 00:24:25.937 Received shutdown signal, test time was about 10.000000 seconds 00:24:25.937 00:24:25.937 Latency(us) 00:24:25.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.937 
=================================================================================================================== 00:24:25.937 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:25.937 [2024-05-15 20:16:18.320472] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:25.937 20:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 103214 00:24:25.937 20:16:18 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 102987 00:24:25.937 20:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 102987 ']' 00:24:25.937 20:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 102987 00:24:25.937 20:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:26.198 20:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:26.198 20:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 102987 00:24:26.198 20:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:26.198 20:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:26.198 20:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 102987' 00:24:26.198 killing process with pid 102987 00:24:26.198 20:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 102987 00:24:26.198 [2024-05-15 20:16:18.488777] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:26.198 [2024-05-15 20:16:18.488815] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:26.198 20:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 102987 00:24:26.198 20:16:18 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:26.198 20:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:26.198 20:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:26.198 20:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:26.198 20:16:18 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:24:26.198 "subsystems": [ 00:24:26.198 { 00:24:26.198 "subsystem": "keyring", 00:24:26.198 "config": [] 00:24:26.199 }, 00:24:26.199 { 00:24:26.199 "subsystem": "iobuf", 00:24:26.199 "config": [ 00:24:26.199 { 00:24:26.199 "method": "iobuf_set_options", 00:24:26.199 "params": { 00:24:26.199 "small_pool_count": 8192, 00:24:26.199 "large_pool_count": 1024, 00:24:26.199 "small_bufsize": 8192, 00:24:26.199 "large_bufsize": 135168 00:24:26.199 } 00:24:26.199 } 00:24:26.199 ] 00:24:26.199 }, 00:24:26.199 { 00:24:26.199 "subsystem": "sock", 00:24:26.199 "config": [ 00:24:26.199 { 00:24:26.199 "method": "sock_impl_set_options", 00:24:26.199 "params": { 00:24:26.199 "impl_name": "posix", 00:24:26.199 "recv_buf_size": 2097152, 00:24:26.199 "send_buf_size": 2097152, 00:24:26.199 "enable_recv_pipe": true, 00:24:26.199 "enable_quickack": false, 00:24:26.199 "enable_placement_id": 0, 00:24:26.199 "enable_zerocopy_send_server": true, 00:24:26.199 "enable_zerocopy_send_client": false, 00:24:26.199 "zerocopy_threshold": 0, 00:24:26.199 "tls_version": 0, 00:24:26.199 "enable_ktls": false 00:24:26.199 } 00:24:26.199 }, 
00:24:26.199 { 00:24:26.199 "method": "sock_impl_set_options", 00:24:26.199 "params": { 00:24:26.199 "impl_name": "ssl", 00:24:26.199 "recv_buf_size": 4096, 00:24:26.199 "send_buf_size": 4096, 00:24:26.199 "enable_recv_pipe": true, 00:24:26.199 "enable_quickack": false, 00:24:26.199 "enable_placement_id": 0, 00:24:26.199 "enable_zerocopy_send_server": true, 00:24:26.199 "enable_zerocopy_send_client": false, 00:24:26.199 "zerocopy_threshold": 0, 00:24:26.199 "tls_version": 0, 00:24:26.199 "enable_ktls": false 00:24:26.199 } 00:24:26.199 } 00:24:26.199 ] 00:24:26.199 }, 00:24:26.199 { 00:24:26.199 "subsystem": "vmd", 00:24:26.199 "config": [] 00:24:26.199 }, 00:24:26.199 { 00:24:26.199 "subsystem": "accel", 00:24:26.199 "config": [ 00:24:26.199 { 00:24:26.199 "method": "accel_set_options", 00:24:26.199 "params": { 00:24:26.199 "small_cache_size": 128, 00:24:26.199 "large_cache_size": 16, 00:24:26.199 "task_count": 2048, 00:24:26.199 "sequence_count": 2048, 00:24:26.199 "buf_count": 2048 00:24:26.199 } 00:24:26.199 } 00:24:26.199 ] 00:24:26.199 }, 00:24:26.199 { 00:24:26.199 "subsystem": "bdev", 00:24:26.199 "config": [ 00:24:26.199 { 00:24:26.199 "method": "bdev_set_options", 00:24:26.199 "params": { 00:24:26.199 "bdev_io_pool_size": 65535, 00:24:26.199 "bdev_io_cache_size": 256, 00:24:26.199 "bdev_auto_examine": true, 00:24:26.199 "iobuf_small_cache_size": 128, 00:24:26.199 "iobuf_large_cache_size": 16 00:24:26.199 } 00:24:26.199 }, 00:24:26.199 { 00:24:26.199 "method": "bdev_raid_set_options", 00:24:26.199 "params": { 00:24:26.199 "process_window_size_kb": 1024 00:24:26.199 } 00:24:26.199 }, 00:24:26.199 { 00:24:26.199 "method": "bdev_iscsi_set_options", 00:24:26.199 "params": { 00:24:26.199 "timeout_sec": 30 00:24:26.199 } 00:24:26.199 }, 00:24:26.199 { 00:24:26.199 "method": "bdev_nvme_set_options", 00:24:26.199 "params": { 00:24:26.199 "action_on_timeout": "none", 00:24:26.199 "timeout_us": 0, 00:24:26.199 "timeout_admin_us": 0, 00:24:26.199 "keep_alive_timeout_ms": 10000, 00:24:26.199 "arbitration_burst": 0, 00:24:26.199 "low_priority_weight": 0, 00:24:26.199 "medium_priority_weight": 0, 00:24:26.199 "high_priority_weight": 0, 00:24:26.199 "nvme_adminq_poll_period_us": 10000, 00:24:26.199 "nvme_ioq_poll_period_us": 0, 00:24:26.199 "io_queue_requests": 0, 00:24:26.199 "delay_cmd_submit": true, 00:24:26.199 "transport_retry_count": 4, 00:24:26.199 "bdev_retry_count": 3, 00:24:26.199 "transport_ack_timeout": 0, 00:24:26.199 "ctrlr_loss_timeout_sec": 0, 00:24:26.199 "reconnect_delay_sec": 0, 00:24:26.199 "fast_io_fail_timeout_sec": 0, 00:24:26.199 "disable_auto_failback": false, 00:24:26.199 "generate_uuids": false, 00:24:26.199 "transport_tos": 0, 00:24:26.199 "nvme_error_stat": false, 00:24:26.199 "rdma_srq_size": 0, 00:24:26.199 "io_path_stat": false, 00:24:26.199 "allow_accel_sequence": false, 00:24:26.199 "rdma_max_cq_size": 0, 00:24:26.199 "rdma_cm_event_timeout_ms": 0, 00:24:26.199 "dhchap_digests": [ 00:24:26.199 "sha256", 00:24:26.199 "sha384", 00:24:26.199 "sha512" 00:24:26.199 ], 00:24:26.199 "dhchap_dhgroups": [ 00:24:26.199 "null", 00:24:26.199 "ffdhe2048", 00:24:26.199 "ffdhe3072", 00:24:26.199 "ffdhe4096", 00:24:26.199 "ffdhe6144", 00:24:26.199 "ffdhe8192" 00:24:26.199 ] 00:24:26.199 } 00:24:26.199 }, 00:24:26.199 { 00:24:26.199 "method": "bdev_nvme_set_hotplug", 00:24:26.199 "params": { 00:24:26.199 "period_us": 100000, 00:24:26.199 "enable": false 00:24:26.199 } 00:24:26.199 }, 00:24:26.199 { 00:24:26.199 "method": "bdev_malloc_create", 00:24:26.199 "params": { 
00:24:26.199 "name": "malloc0", 00:24:26.199 "num_blocks": 8192, 00:24:26.199 "block_size": 4096, 00:24:26.199 "physical_block_size": 4096, 00:24:26.199 "uuid": "94e43645-4af5-4b5f-bd77-7487eb345fa2", 00:24:26.199 "optimal_io_boundary": 0 00:24:26.199 } 00:24:26.199 }, 00:24:26.199 { 00:24:26.199 "method": "bdev_wait_for_examine" 00:24:26.199 } 00:24:26.199 ] 00:24:26.199 }, 00:24:26.199 { 00:24:26.199 "subsystem": "nbd", 00:24:26.199 "config": [] 00:24:26.199 }, 00:24:26.199 { 00:24:26.199 "subsystem": "scheduler", 00:24:26.199 "config": [ 00:24:26.199 { 00:24:26.199 "method": "framework_set_scheduler", 00:24:26.199 "params": { 00:24:26.199 "name": "static" 00:24:26.199 } 00:24:26.199 } 00:24:26.199 ] 00:24:26.199 }, 00:24:26.199 { 00:24:26.199 "subsystem": "nvmf", 00:24:26.199 "config": [ 00:24:26.199 { 00:24:26.199 "method": "nvmf_set_config", 00:24:26.199 "params": { 00:24:26.199 "discovery_filter": "match_any", 00:24:26.199 "admin_cmd_passthru": { 00:24:26.199 "identify_ctrlr": false 00:24:26.199 } 00:24:26.199 } 00:24:26.199 }, 00:24:26.199 { 00:24:26.199 "method": "nvmf_set_max_subsystems", 00:24:26.199 "params": { 00:24:26.199 "max_subsystems": 1024 00:24:26.199 } 00:24:26.199 }, 00:24:26.199 { 00:24:26.199 "method": "nvmf_set_crdt", 00:24:26.199 "params": { 00:24:26.199 "crdt1": 0, 00:24:26.199 "crdt2": 0, 00:24:26.199 "crdt3": 0 00:24:26.199 } 00:24:26.199 }, 00:24:26.199 { 00:24:26.199 "method": "nvmf_create_transport", 00:24:26.199 "params": { 00:24:26.199 "trtype": "TCP", 00:24:26.199 "max_queue_depth": 128, 00:24:26.199 "max_io_qpairs_per_ctrlr": 127, 00:24:26.199 "in_capsule_data_size": 4096, 00:24:26.199 "max_io_size": 131072, 00:24:26.199 "io_unit_size": 131072, 00:24:26.199 "max_aq_depth": 128, 00:24:26.199 "num_shared_buffers": 511, 00:24:26.199 "buf_cache_size": 4294967295, 00:24:26.199 "dif_insert_or_strip": false, 00:24:26.199 "zcopy": false, 00:24:26.199 "c2h_success": false, 00:24:26.199 "sock_priority": 0, 00:24:26.199 "abort_timeout_sec": 1, 00:24:26.199 "ack_timeout": 0, 00:24:26.199 "data_wr_pool_size": 0 00:24:26.199 } 00:24:26.199 }, 00:24:26.199 { 00:24:26.199 "method": "nvmf_create_subsystem", 00:24:26.199 "params": { 00:24:26.199 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:26.199 "allow_any_host": false, 00:24:26.199 "serial_number": "SPDK00000000000001", 00:24:26.199 "model_number": "SPDK bdev Controller", 00:24:26.199 "max_namespaces": 10, 00:24:26.199 "min_cntlid": 1, 00:24:26.199 "max_cntlid": 65519, 00:24:26.199 "ana_reporting": false 00:24:26.199 } 00:24:26.199 }, 00:24:26.199 { 00:24:26.199 "method": "nvmf_subsystem_add_host", 00:24:26.199 "params": { 00:24:26.199 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:26.199 "host": "nqn.2016-06.io.spdk:host1", 00:24:26.199 "psk": "/tmp/tmp.QHjGbPPhxX" 00:24:26.199 } 00:24:26.199 }, 00:24:26.199 { 00:24:26.199 "method": "nvmf_subsystem_add_ns", 00:24:26.199 "params": { 00:24:26.199 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:26.199 "namespace": { 00:24:26.199 "nsid": 1, 00:24:26.200 "bdev_name": "malloc0", 00:24:26.200 "nguid": "94E436454AF54B5FBD777487EB345FA2", 00:24:26.200 "uuid": "94e43645-4af5-4b5f-bd77-7487eb345fa2", 00:24:26.200 "no_auto_visible": false 00:24:26.200 } 00:24:26.200 } 00:24:26.200 }, 00:24:26.200 { 00:24:26.200 "method": "nvmf_subsystem_add_listener", 00:24:26.200 "params": { 00:24:26.200 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:26.200 "listen_address": { 00:24:26.200 "trtype": "TCP", 00:24:26.200 "adrfam": "IPv4", 00:24:26.200 "traddr": "10.0.0.2", 00:24:26.200 "trsvcid": "4420" 
00:24:26.200 }, 00:24:26.200 "secure_channel": true 00:24:26.200 } 00:24:26.200 } 00:24:26.200 ] 00:24:26.200 } 00:24:26.200 ] 00:24:26.200 }' 00:24:26.200 20:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=103538 00:24:26.200 20:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 103538 00:24:26.200 20:16:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:26.200 20:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 103538 ']' 00:24:26.200 20:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.200 20:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:26.200 20:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:26.200 20:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:26.200 20:16:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:26.200 [2024-05-15 20:16:18.684359] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:24:26.200 [2024-05-15 20:16:18.684435] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:26.461 EAL: No free 2048 kB hugepages reported on node 1 00:24:26.461 [2024-05-15 20:16:18.760598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.461 [2024-05-15 20:16:18.824889] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:26.461 [2024-05-15 20:16:18.824926] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:26.461 [2024-05-15 20:16:18.824933] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:26.461 [2024-05-15 20:16:18.824940] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:26.461 [2024-05-15 20:16:18.824946] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
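The block of JSON echoed above is the tgtconf captured earlier with save_config; target/tls.sh@203 feeds it to a brand-new nvmf_tgt through -c and bash process substitution (hence the /dev/fd/62 path), so the TLS listener, the malloc0 namespace and the PSK-protected host all come up from the configuration file without any RPC being issued. A stand-alone sketch of the same pattern, with paths as in this job:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  tgtconf=$($rpc save_config)                     # capture the live target's configuration as JSON
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &   # replay it into a fresh target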
00:24:26.461 [2024-05-15 20:16:18.825007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:26.721 [2024-05-15 20:16:19.006224] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:26.721 [2024-05-15 20:16:19.022170] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:26.721 [2024-05-15 20:16:19.038207] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:26.721 [2024-05-15 20:16:19.038250] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:26.721 [2024-05-15 20:16:19.056481] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:27.294 20:16:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:27.294 20:16:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:27.294 20:16:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:27.294 20:16:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:27.294 20:16:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:27.294 20:16:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:27.294 20:16:19 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=103869 00:24:27.294 20:16:19 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 103869 /var/tmp/bdevperf.sock 00:24:27.294 20:16:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 103869 ']' 00:24:27.294 20:16:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:27.294 20:16:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:27.294 20:16:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:27.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
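With the target listening on 10.0.0.2 port 4420 (TLS support flagged as experimental, and the path-based PSK already marked deprecated for v24.09), the script brings up the initiator side: a bdevperf instance in RPC mode whose JSON configuration is printed next and delivered on /dev/fd/63; in this first pass the bdev_nvme_attach_controller entry embeds the same PSK file path directly. The sketch below shows the pattern, not the verbatim script: paths are shortened to the SPDK tree, and the backgrounding and the BPERF_CONFIG here-string are illustrative.

  # Sketch: run bdevperf as an RPC server (-z) with its config on fd 63, then
  # drive the 10-second verify workload through the companion helper script.
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 63<<< "$BPERF_CONFIG" &
  ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests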
00:24:27.294 20:16:19 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:27.294 20:16:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:27.294 20:16:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:27.294 20:16:19 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:24:27.294 "subsystems": [ 00:24:27.294 { 00:24:27.294 "subsystem": "keyring", 00:24:27.294 "config": [] 00:24:27.294 }, 00:24:27.294 { 00:24:27.294 "subsystem": "iobuf", 00:24:27.294 "config": [ 00:24:27.294 { 00:24:27.294 "method": "iobuf_set_options", 00:24:27.294 "params": { 00:24:27.294 "small_pool_count": 8192, 00:24:27.294 "large_pool_count": 1024, 00:24:27.294 "small_bufsize": 8192, 00:24:27.294 "large_bufsize": 135168 00:24:27.294 } 00:24:27.294 } 00:24:27.294 ] 00:24:27.294 }, 00:24:27.294 { 00:24:27.294 "subsystem": "sock", 00:24:27.294 "config": [ 00:24:27.294 { 00:24:27.294 "method": "sock_impl_set_options", 00:24:27.294 "params": { 00:24:27.294 "impl_name": "posix", 00:24:27.294 "recv_buf_size": 2097152, 00:24:27.294 "send_buf_size": 2097152, 00:24:27.294 "enable_recv_pipe": true, 00:24:27.294 "enable_quickack": false, 00:24:27.294 "enable_placement_id": 0, 00:24:27.294 "enable_zerocopy_send_server": true, 00:24:27.294 "enable_zerocopy_send_client": false, 00:24:27.294 "zerocopy_threshold": 0, 00:24:27.294 "tls_version": 0, 00:24:27.294 "enable_ktls": false 00:24:27.294 } 00:24:27.294 }, 00:24:27.294 { 00:24:27.294 "method": "sock_impl_set_options", 00:24:27.294 "params": { 00:24:27.294 "impl_name": "ssl", 00:24:27.294 "recv_buf_size": 4096, 00:24:27.294 "send_buf_size": 4096, 00:24:27.294 "enable_recv_pipe": true, 00:24:27.294 "enable_quickack": false, 00:24:27.294 "enable_placement_id": 0, 00:24:27.294 "enable_zerocopy_send_server": true, 00:24:27.294 "enable_zerocopy_send_client": false, 00:24:27.294 "zerocopy_threshold": 0, 00:24:27.294 "tls_version": 0, 00:24:27.294 "enable_ktls": false 00:24:27.294 } 00:24:27.294 } 00:24:27.294 ] 00:24:27.294 }, 00:24:27.294 { 00:24:27.294 "subsystem": "vmd", 00:24:27.294 "config": [] 00:24:27.295 }, 00:24:27.295 { 00:24:27.295 "subsystem": "accel", 00:24:27.295 "config": [ 00:24:27.295 { 00:24:27.295 "method": "accel_set_options", 00:24:27.295 "params": { 00:24:27.295 "small_cache_size": 128, 00:24:27.295 "large_cache_size": 16, 00:24:27.295 "task_count": 2048, 00:24:27.295 "sequence_count": 2048, 00:24:27.295 "buf_count": 2048 00:24:27.295 } 00:24:27.295 } 00:24:27.295 ] 00:24:27.295 }, 00:24:27.295 { 00:24:27.295 "subsystem": "bdev", 00:24:27.295 "config": [ 00:24:27.295 { 00:24:27.295 "method": "bdev_set_options", 00:24:27.295 "params": { 00:24:27.295 "bdev_io_pool_size": 65535, 00:24:27.295 "bdev_io_cache_size": 256, 00:24:27.295 "bdev_auto_examine": true, 00:24:27.295 "iobuf_small_cache_size": 128, 00:24:27.295 "iobuf_large_cache_size": 16 00:24:27.295 } 00:24:27.295 }, 00:24:27.295 { 00:24:27.295 "method": "bdev_raid_set_options", 00:24:27.295 "params": { 00:24:27.295 "process_window_size_kb": 1024 00:24:27.295 } 00:24:27.295 }, 00:24:27.295 { 00:24:27.295 "method": "bdev_iscsi_set_options", 00:24:27.295 "params": { 00:24:27.295 "timeout_sec": 30 00:24:27.295 } 00:24:27.295 }, 00:24:27.295 { 00:24:27.295 "method": "bdev_nvme_set_options", 00:24:27.295 "params": { 00:24:27.295 "action_on_timeout": "none", 00:24:27.295 "timeout_us": 0, 00:24:27.295 
"timeout_admin_us": 0, 00:24:27.295 "keep_alive_timeout_ms": 10000, 00:24:27.295 "arbitration_burst": 0, 00:24:27.295 "low_priority_weight": 0, 00:24:27.295 "medium_priority_weight": 0, 00:24:27.295 "high_priority_weight": 0, 00:24:27.295 "nvme_adminq_poll_period_us": 10000, 00:24:27.295 "nvme_ioq_poll_period_us": 0, 00:24:27.295 "io_queue_requests": 512, 00:24:27.295 "delay_cmd_submit": true, 00:24:27.295 "transport_retry_count": 4, 00:24:27.295 "bdev_retry_count": 3, 00:24:27.295 "transport_ack_timeout": 0, 00:24:27.295 "ctrlr_loss_timeout_sec": 0, 00:24:27.295 "reconnect_delay_sec": 0, 00:24:27.295 "fast_io_fail_timeout_sec": 0, 00:24:27.295 "disable_auto_failback": false, 00:24:27.295 "generate_uuids": false, 00:24:27.295 "transport_tos": 0, 00:24:27.295 "nvme_error_stat": false, 00:24:27.295 "rdma_srq_size": 0, 00:24:27.295 "io_path_stat": false, 00:24:27.295 "allow_accel_sequence": false, 00:24:27.295 "rdma_max_cq_size": 0, 00:24:27.295 "rdma_cm_event_timeout_ms": 0, 00:24:27.295 "dhchap_digests": [ 00:24:27.295 "sha256", 00:24:27.295 "sha384", 00:24:27.295 "sha512" 00:24:27.295 ], 00:24:27.295 "dhchap_dhgroups": [ 00:24:27.295 "null", 00:24:27.295 "ffdhe2048", 00:24:27.295 "ffdhe3072", 00:24:27.295 "ffdhe4096", 00:24:27.295 "ffdhe6144", 00:24:27.295 "ffdhe8192" 00:24:27.295 ] 00:24:27.295 } 00:24:27.295 }, 00:24:27.295 { 00:24:27.295 "method": "bdev_nvme_attach_controller", 00:24:27.295 "params": { 00:24:27.295 "name": "TLSTEST", 00:24:27.295 "trtype": "TCP", 00:24:27.295 "adrfam": "IPv4", 00:24:27.295 "traddr": "10.0.0.2", 00:24:27.295 "trsvcid": "4420", 00:24:27.295 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:27.295 "prchk_reftag": false, 00:24:27.295 "prchk_guard": false, 00:24:27.295 "ctrlr_loss_timeout_sec": 0, 00:24:27.295 "reconnect_delay_sec": 0, 00:24:27.295 "fast_io_fail_timeout_sec": 0, 00:24:27.295 "psk": "/tmp/tmp.QHjGbPPhxX", 00:24:27.295 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:27.295 "hdgst": false, 00:24:27.295 "ddgst": false 00:24:27.295 } 00:24:27.295 }, 00:24:27.295 { 00:24:27.295 "method": "bdev_nvme_set_hotplug", 00:24:27.295 "params": { 00:24:27.295 "period_us": 100000, 00:24:27.295 "enable": false 00:24:27.295 } 00:24:27.295 }, 00:24:27.295 { 00:24:27.295 "method": "bdev_wait_for_examine" 00:24:27.295 } 00:24:27.295 ] 00:24:27.295 }, 00:24:27.295 { 00:24:27.295 "subsystem": "nbd", 00:24:27.295 "config": [] 00:24:27.295 } 00:24:27.295 ] 00:24:27.295 }' 00:24:27.295 [2024-05-15 20:16:19.638443] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:24:27.295 [2024-05-15 20:16:19.638512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103869 ] 00:24:27.295 EAL: No free 2048 kB hugepages reported on node 1 00:24:27.295 [2024-05-15 20:16:19.696784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.295 [2024-05-15 20:16:19.748914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:27.557 [2024-05-15 20:16:19.865388] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:27.557 [2024-05-15 20:16:19.865466] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:28.128 20:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:28.129 20:16:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:28.129 20:16:20 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:28.129 Running I/O for 10 seconds... 00:24:40.361 00:24:40.361 Latency(us) 00:24:40.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.361 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:40.361 Verification LBA range: start 0x0 length 0x2000 00:24:40.361 TLSTESTn1 : 10.03 3109.17 12.15 0.00 0.00 41100.65 4696.75 86507.52 00:24:40.361 =================================================================================================================== 00:24:40.361 Total : 3109.17 12.15 0.00 0.00 41100.65 4696.75 86507.52 00:24:40.361 0 00:24:40.361 20:16:30 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:40.361 20:16:30 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 103869 00:24:40.361 20:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 103869 ']' 00:24:40.361 20:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 103869 00:24:40.361 20:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:40.361 20:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:40.361 20:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 103869 00:24:40.361 20:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:24:40.361 20:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:24:40.361 20:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 103869' 00:24:40.361 killing process with pid 103869 00:24:40.361 20:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 103869 00:24:40.361 Received shutdown signal, test time was about 10.000000 seconds 00:24:40.361 00:24:40.361 Latency(us) 00:24:40.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.361 =================================================================================================================== 00:24:40.361 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:40.361 [2024-05-15 20:16:30.741731] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in 
v24.09 hit 1 times 00:24:40.361 20:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 103869 00:24:40.361 20:16:30 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 103538 00:24:40.361 20:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 103538 ']' 00:24:40.361 20:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 103538 00:24:40.361 20:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:40.361 20:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:40.361 20:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 103538 00:24:40.361 20:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:40.361 20:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:40.361 20:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 103538' 00:24:40.361 killing process with pid 103538 00:24:40.361 20:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 103538 00:24:40.361 [2024-05-15 20:16:30.906049] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:40.361 [2024-05-15 20:16:30.906090] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:40.361 20:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 103538 00:24:40.361 20:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:24:40.361 20:16:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:40.361 20:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:40.361 20:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:40.361 20:16:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=105906 00:24:40.361 20:16:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 105906 00:24:40.361 20:16:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:40.361 20:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 105906 ']' 00:24:40.361 20:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.361 20:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:40.361 20:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:40.361 20:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:40.361 20:16:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:40.361 [2024-05-15 20:16:31.108846] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:24:40.361 [2024-05-15 20:16:31.108904] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:40.361 EAL: No free 2048 kB hugepages reported on node 1 00:24:40.361 [2024-05-15 20:16:31.187401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.361 [2024-05-15 20:16:31.276840] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:40.361 [2024-05-15 20:16:31.276907] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:40.361 [2024-05-15 20:16:31.276916] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:40.361 [2024-05-15 20:16:31.276923] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:40.361 [2024-05-15 20:16:31.276929] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:40.361 [2024-05-15 20:16:31.276966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.361 20:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:40.361 20:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:40.361 20:16:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:40.362 20:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:40.362 20:16:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:40.362 20:16:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:40.362 20:16:32 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.QHjGbPPhxX 00:24:40.362 20:16:32 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.QHjGbPPhxX 00:24:40.362 20:16:32 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:40.362 [2024-05-15 20:16:32.245910] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:40.362 20:16:32 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:40.362 20:16:32 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:40.362 [2024-05-15 20:16:32.670950] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:40.362 [2024-05-15 20:16:32.671026] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:40.362 [2024-05-15 20:16:32.671280] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:40.362 20:16:32 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:40.635 malloc0 00:24:40.635 20:16:32 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
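Unlike the first pass, the second target (pid 105906) is configured at runtime: the trace above shows setup_nvmf_tgt creating the TCP transport, the subsystem, a TLS-enabled listener (the -k flag), and a malloc bdev exported as namespace 1. Collected from those trace lines, with the rpc.py path shortened to the SPDK tree:

  # Runtime target setup for the second pass (abridged from the trace above).
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1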
00:24:40.897 20:16:33 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QHjGbPPhxX 00:24:40.897 [2024-05-15 20:16:33.338960] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:40.897 20:16:33 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=106457 00:24:40.897 20:16:33 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:40.897 20:16:33 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 106457 /var/tmp/bdevperf.sock 00:24:40.897 20:16:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 106457 ']' 00:24:40.897 20:16:33 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:40.897 20:16:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:40.897 20:16:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:40.897 20:16:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:40.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:40.897 20:16:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:40.897 20:16:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:41.157 [2024-05-15 20:16:33.411941] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:24:41.157 [2024-05-15 20:16:33.412009] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106457 ] 00:24:41.157 EAL: No free 2048 kB hugepages reported on node 1 00:24:41.157 [2024-05-15 20:16:33.484944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.157 [2024-05-15 20:16:33.557976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:41.157 20:16:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:41.157 20:16:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:41.157 20:16:33 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QHjGbPPhxX 00:24:41.419 20:16:33 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:41.698 [2024-05-15 20:16:34.032193] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:41.698 nvme0n1 00:24:41.698 20:16:34 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:41.993 Running I/O for 1 seconds... 
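On top of that, the host nqn.2016-06.io.spdk:host1 is authorized with the PSK file (still the deprecated path form on the target side), while the initiator now goes through the keyring: the key file is registered as "key0" over the bdevperf RPC socket and the controller is attached with --psk key0. The one-second verify run started above reports its results below. The commands are collected from the trace, with paths shortened to the SPDK tree:

  # Initiator-side TLS attach for the second pass (abridged from the trace above).
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QHjGbPPhxX
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QHjGbPPhxX
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests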
00:24:42.936 00:24:42.936 Latency(us) 00:24:42.936 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:42.936 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:42.936 Verification LBA range: start 0x0 length 0x2000 00:24:42.936 nvme0n1 : 1.05 2400.50 9.38 0.00 0.00 52058.79 9175.04 118838.61 00:24:42.936 =================================================================================================================== 00:24:42.936 Total : 2400.50 9.38 0.00 0.00 52058.79 9175.04 118838.61 00:24:42.936 0 00:24:42.936 20:16:35 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 106457 00:24:42.936 20:16:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 106457 ']' 00:24:42.936 20:16:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 106457 00:24:42.936 20:16:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:42.936 20:16:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:42.936 20:16:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 106457 00:24:42.936 20:16:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:42.936 20:16:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:42.936 20:16:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 106457' 00:24:42.936 killing process with pid 106457 00:24:42.936 20:16:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 106457 00:24:42.936 Received shutdown signal, test time was about 1.000000 seconds 00:24:42.936 00:24:42.936 Latency(us) 00:24:42.936 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:42.936 =================================================================================================================== 00:24:42.936 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:42.936 20:16:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 106457 00:24:43.197 20:16:35 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 105906 00:24:43.197 20:16:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 105906 ']' 00:24:43.197 20:16:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 105906 00:24:43.197 20:16:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:43.197 20:16:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:43.197 20:16:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 105906 00:24:43.197 20:16:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:43.197 20:16:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:43.197 20:16:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 105906' 00:24:43.197 killing process with pid 105906 00:24:43.197 20:16:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 105906 00:24:43.197 [2024-05-15 20:16:35.532632] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:43.197 [2024-05-15 20:16:35.532679] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:43.197 20:16:35 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@970 -- # wait 105906 00:24:43.197 20:16:35 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:24:43.197 20:16:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:43.197 20:16:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:43.197 20:16:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:43.197 20:16:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=106952 00:24:43.197 20:16:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 106952 00:24:43.197 20:16:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:43.197 20:16:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 106952 ']' 00:24:43.197 20:16:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.197 20:16:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:43.197 20:16:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:43.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:43.197 20:16:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:43.197 20:16:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:43.459 [2024-05-15 20:16:35.731981] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:24:43.459 [2024-05-15 20:16:35.732033] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:43.459 EAL: No free 2048 kB hugepages reported on node 1 00:24:43.459 [2024-05-15 20:16:35.823388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.459 [2024-05-15 20:16:35.906447] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:43.459 [2024-05-15 20:16:35.906514] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:43.459 [2024-05-15 20:16:35.906522] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:43.459 [2024-05-15 20:16:35.906529] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:43.459 [2024-05-15 20:16:35.906535] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
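The second pass finishes with roughly 2400 IOPS over about a second (table above); the initiator (pid 106457) and the target (pid 105906) are then torn down with the killprocess helper before a third target (pid 106952) is started. A sketch of that per-pass teardown, using the helper and PID variable names that appear in the trace (killprocess here is the autotest_common.sh function whose kill/wait steps are echoed above, not code defined in this log):

  # Per-pass teardown (sketch): stop the initiator first, then the target.
  killprocess "$bdevperf_pid"   # kills the bdevperf PID and waits for it to exit
  killprocess "$nvmfpid"        # same for the nvmf_tgt PID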
00:24:43.459 [2024-05-15 20:16:35.906572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.402 20:16:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:44.402 20:16:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:44.402 20:16:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:44.402 20:16:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:44.402 20:16:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:44.402 20:16:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:44.402 20:16:36 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:24:44.402 20:16:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.402 20:16:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:44.402 [2024-05-15 20:16:36.666725] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:44.402 malloc0 00:24:44.402 [2024-05-15 20:16:36.696929] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:44.402 [2024-05-15 20:16:36.697002] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:44.402 [2024-05-15 20:16:36.697265] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:44.402 20:16:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.402 20:16:36 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=107087 00:24:44.402 20:16:36 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 107087 /var/tmp/bdevperf.sock 00:24:44.402 20:16:36 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:44.402 20:16:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 107087 ']' 00:24:44.402 20:16:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:44.402 20:16:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:44.402 20:16:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:44.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:44.402 20:16:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:44.402 20:16:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:44.402 [2024-05-15 20:16:36.773621] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:24:44.402 [2024-05-15 20:16:36.773692] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107087 ] 00:24:44.402 EAL: No free 2048 kB hugepages reported on node 1 00:24:44.402 [2024-05-15 20:16:36.844331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.663 [2024-05-15 20:16:36.917526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.663 20:16:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:44.663 20:16:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:44.663 20:16:36 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QHjGbPPhxX 00:24:44.924 20:16:37 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:44.924 [2024-05-15 20:16:37.387648] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:45.185 nvme0n1 00:24:45.185 20:16:37 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:45.185 Running I/O for 1 seconds... 00:24:46.570 00:24:46.570 Latency(us) 00:24:46.570 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.570 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:46.570 Verification LBA range: start 0x0 length 0x2000 00:24:46.570 nvme0n1 : 1.06 1919.44 7.50 0.00 0.00 64852.14 8082.77 107042.13 00:24:46.570 =================================================================================================================== 00:24:46.570 Total : 1919.44 7.50 0.00 0.00 64852.14 8082.77 107042.13 00:24:46.570 0 00:24:46.570 20:16:38 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:24:46.570 20:16:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.570 20:16:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:46.570 20:16:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.570 20:16:38 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:24:46.570 "subsystems": [ 00:24:46.570 { 00:24:46.570 "subsystem": "keyring", 00:24:46.570 "config": [ 00:24:46.570 { 00:24:46.570 "method": "keyring_file_add_key", 00:24:46.570 "params": { 00:24:46.570 "name": "key0", 00:24:46.570 "path": "/tmp/tmp.QHjGbPPhxX" 00:24:46.570 } 00:24:46.570 } 00:24:46.570 ] 00:24:46.570 }, 00:24:46.570 { 00:24:46.570 "subsystem": "iobuf", 00:24:46.570 "config": [ 00:24:46.570 { 00:24:46.570 "method": "iobuf_set_options", 00:24:46.570 "params": { 00:24:46.570 "small_pool_count": 8192, 00:24:46.570 "large_pool_count": 1024, 00:24:46.570 "small_bufsize": 8192, 00:24:46.570 "large_bufsize": 135168 00:24:46.570 } 00:24:46.570 } 00:24:46.570 ] 00:24:46.570 }, 00:24:46.570 { 00:24:46.570 "subsystem": "sock", 00:24:46.570 "config": [ 00:24:46.570 { 00:24:46.570 "method": "sock_impl_set_options", 00:24:46.570 "params": { 00:24:46.570 "impl_name": "posix", 00:24:46.570 "recv_buf_size": 2097152, 
00:24:46.570 "send_buf_size": 2097152, 00:24:46.570 "enable_recv_pipe": true, 00:24:46.570 "enable_quickack": false, 00:24:46.570 "enable_placement_id": 0, 00:24:46.570 "enable_zerocopy_send_server": true, 00:24:46.570 "enable_zerocopy_send_client": false, 00:24:46.570 "zerocopy_threshold": 0, 00:24:46.570 "tls_version": 0, 00:24:46.570 "enable_ktls": false 00:24:46.570 } 00:24:46.570 }, 00:24:46.570 { 00:24:46.570 "method": "sock_impl_set_options", 00:24:46.570 "params": { 00:24:46.570 "impl_name": "ssl", 00:24:46.570 "recv_buf_size": 4096, 00:24:46.570 "send_buf_size": 4096, 00:24:46.570 "enable_recv_pipe": true, 00:24:46.570 "enable_quickack": false, 00:24:46.570 "enable_placement_id": 0, 00:24:46.570 "enable_zerocopy_send_server": true, 00:24:46.570 "enable_zerocopy_send_client": false, 00:24:46.570 "zerocopy_threshold": 0, 00:24:46.570 "tls_version": 0, 00:24:46.570 "enable_ktls": false 00:24:46.570 } 00:24:46.570 } 00:24:46.570 ] 00:24:46.570 }, 00:24:46.570 { 00:24:46.570 "subsystem": "vmd", 00:24:46.570 "config": [] 00:24:46.570 }, 00:24:46.570 { 00:24:46.570 "subsystem": "accel", 00:24:46.570 "config": [ 00:24:46.570 { 00:24:46.570 "method": "accel_set_options", 00:24:46.570 "params": { 00:24:46.570 "small_cache_size": 128, 00:24:46.570 "large_cache_size": 16, 00:24:46.570 "task_count": 2048, 00:24:46.570 "sequence_count": 2048, 00:24:46.570 "buf_count": 2048 00:24:46.570 } 00:24:46.570 } 00:24:46.570 ] 00:24:46.570 }, 00:24:46.570 { 00:24:46.570 "subsystem": "bdev", 00:24:46.570 "config": [ 00:24:46.570 { 00:24:46.570 "method": "bdev_set_options", 00:24:46.570 "params": { 00:24:46.570 "bdev_io_pool_size": 65535, 00:24:46.570 "bdev_io_cache_size": 256, 00:24:46.570 "bdev_auto_examine": true, 00:24:46.570 "iobuf_small_cache_size": 128, 00:24:46.570 "iobuf_large_cache_size": 16 00:24:46.570 } 00:24:46.570 }, 00:24:46.570 { 00:24:46.570 "method": "bdev_raid_set_options", 00:24:46.570 "params": { 00:24:46.570 "process_window_size_kb": 1024 00:24:46.570 } 00:24:46.570 }, 00:24:46.570 { 00:24:46.570 "method": "bdev_iscsi_set_options", 00:24:46.570 "params": { 00:24:46.570 "timeout_sec": 30 00:24:46.570 } 00:24:46.570 }, 00:24:46.570 { 00:24:46.570 "method": "bdev_nvme_set_options", 00:24:46.570 "params": { 00:24:46.570 "action_on_timeout": "none", 00:24:46.570 "timeout_us": 0, 00:24:46.570 "timeout_admin_us": 0, 00:24:46.570 "keep_alive_timeout_ms": 10000, 00:24:46.570 "arbitration_burst": 0, 00:24:46.571 "low_priority_weight": 0, 00:24:46.571 "medium_priority_weight": 0, 00:24:46.571 "high_priority_weight": 0, 00:24:46.571 "nvme_adminq_poll_period_us": 10000, 00:24:46.571 "nvme_ioq_poll_period_us": 0, 00:24:46.571 "io_queue_requests": 0, 00:24:46.571 "delay_cmd_submit": true, 00:24:46.571 "transport_retry_count": 4, 00:24:46.571 "bdev_retry_count": 3, 00:24:46.571 "transport_ack_timeout": 0, 00:24:46.571 "ctrlr_loss_timeout_sec": 0, 00:24:46.571 "reconnect_delay_sec": 0, 00:24:46.571 "fast_io_fail_timeout_sec": 0, 00:24:46.571 "disable_auto_failback": false, 00:24:46.571 "generate_uuids": false, 00:24:46.571 "transport_tos": 0, 00:24:46.571 "nvme_error_stat": false, 00:24:46.571 "rdma_srq_size": 0, 00:24:46.571 "io_path_stat": false, 00:24:46.571 "allow_accel_sequence": false, 00:24:46.571 "rdma_max_cq_size": 0, 00:24:46.571 "rdma_cm_event_timeout_ms": 0, 00:24:46.571 "dhchap_digests": [ 00:24:46.571 "sha256", 00:24:46.571 "sha384", 00:24:46.571 "sha512" 00:24:46.571 ], 00:24:46.571 "dhchap_dhgroups": [ 00:24:46.571 "null", 00:24:46.571 "ffdhe2048", 00:24:46.571 "ffdhe3072", 
00:24:46.571 "ffdhe4096", 00:24:46.571 "ffdhe6144", 00:24:46.571 "ffdhe8192" 00:24:46.571 ] 00:24:46.571 } 00:24:46.571 }, 00:24:46.571 { 00:24:46.571 "method": "bdev_nvme_set_hotplug", 00:24:46.571 "params": { 00:24:46.571 "period_us": 100000, 00:24:46.571 "enable": false 00:24:46.571 } 00:24:46.571 }, 00:24:46.571 { 00:24:46.571 "method": "bdev_malloc_create", 00:24:46.571 "params": { 00:24:46.571 "name": "malloc0", 00:24:46.571 "num_blocks": 8192, 00:24:46.571 "block_size": 4096, 00:24:46.571 "physical_block_size": 4096, 00:24:46.571 "uuid": "e989f533-30e0-458a-8967-04ff7db28670", 00:24:46.571 "optimal_io_boundary": 0 00:24:46.571 } 00:24:46.571 }, 00:24:46.571 { 00:24:46.571 "method": "bdev_wait_for_examine" 00:24:46.571 } 00:24:46.571 ] 00:24:46.571 }, 00:24:46.571 { 00:24:46.571 "subsystem": "nbd", 00:24:46.571 "config": [] 00:24:46.571 }, 00:24:46.571 { 00:24:46.571 "subsystem": "scheduler", 00:24:46.571 "config": [ 00:24:46.571 { 00:24:46.571 "method": "framework_set_scheduler", 00:24:46.571 "params": { 00:24:46.571 "name": "static" 00:24:46.571 } 00:24:46.571 } 00:24:46.571 ] 00:24:46.571 }, 00:24:46.571 { 00:24:46.571 "subsystem": "nvmf", 00:24:46.571 "config": [ 00:24:46.571 { 00:24:46.571 "method": "nvmf_set_config", 00:24:46.571 "params": { 00:24:46.571 "discovery_filter": "match_any", 00:24:46.571 "admin_cmd_passthru": { 00:24:46.571 "identify_ctrlr": false 00:24:46.571 } 00:24:46.571 } 00:24:46.571 }, 00:24:46.571 { 00:24:46.571 "method": "nvmf_set_max_subsystems", 00:24:46.571 "params": { 00:24:46.571 "max_subsystems": 1024 00:24:46.571 } 00:24:46.571 }, 00:24:46.571 { 00:24:46.571 "method": "nvmf_set_crdt", 00:24:46.571 "params": { 00:24:46.571 "crdt1": 0, 00:24:46.571 "crdt2": 0, 00:24:46.571 "crdt3": 0 00:24:46.571 } 00:24:46.571 }, 00:24:46.571 { 00:24:46.571 "method": "nvmf_create_transport", 00:24:46.571 "params": { 00:24:46.571 "trtype": "TCP", 00:24:46.571 "max_queue_depth": 128, 00:24:46.571 "max_io_qpairs_per_ctrlr": 127, 00:24:46.571 "in_capsule_data_size": 4096, 00:24:46.571 "max_io_size": 131072, 00:24:46.571 "io_unit_size": 131072, 00:24:46.571 "max_aq_depth": 128, 00:24:46.571 "num_shared_buffers": 511, 00:24:46.571 "buf_cache_size": 4294967295, 00:24:46.571 "dif_insert_or_strip": false, 00:24:46.571 "zcopy": false, 00:24:46.571 "c2h_success": false, 00:24:46.571 "sock_priority": 0, 00:24:46.571 "abort_timeout_sec": 1, 00:24:46.571 "ack_timeout": 0, 00:24:46.571 "data_wr_pool_size": 0 00:24:46.571 } 00:24:46.571 }, 00:24:46.571 { 00:24:46.571 "method": "nvmf_create_subsystem", 00:24:46.571 "params": { 00:24:46.571 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.571 "allow_any_host": false, 00:24:46.571 "serial_number": "00000000000000000000", 00:24:46.571 "model_number": "SPDK bdev Controller", 00:24:46.571 "max_namespaces": 32, 00:24:46.571 "min_cntlid": 1, 00:24:46.571 "max_cntlid": 65519, 00:24:46.571 "ana_reporting": false 00:24:46.571 } 00:24:46.571 }, 00:24:46.571 { 00:24:46.571 "method": "nvmf_subsystem_add_host", 00:24:46.571 "params": { 00:24:46.571 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.571 "host": "nqn.2016-06.io.spdk:host1", 00:24:46.571 "psk": "key0" 00:24:46.571 } 00:24:46.571 }, 00:24:46.571 { 00:24:46.571 "method": "nvmf_subsystem_add_ns", 00:24:46.571 "params": { 00:24:46.571 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.571 "namespace": { 00:24:46.571 "nsid": 1, 00:24:46.571 "bdev_name": "malloc0", 00:24:46.571 "nguid": "E989F53330E0458A896704FF7DB28670", 00:24:46.571 "uuid": "e989f533-30e0-458a-8967-04ff7db28670", 00:24:46.571 
"no_auto_visible": false 00:24:46.571 } 00:24:46.571 } 00:24:46.571 }, 00:24:46.571 { 00:24:46.571 "method": "nvmf_subsystem_add_listener", 00:24:46.571 "params": { 00:24:46.571 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.571 "listen_address": { 00:24:46.571 "trtype": "TCP", 00:24:46.571 "adrfam": "IPv4", 00:24:46.571 "traddr": "10.0.0.2", 00:24:46.571 "trsvcid": "4420" 00:24:46.571 }, 00:24:46.571 "secure_channel": true 00:24:46.571 } 00:24:46.571 } 00:24:46.571 ] 00:24:46.571 } 00:24:46.571 ] 00:24:46.571 }' 00:24:46.571 20:16:38 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:46.832 20:16:39 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:24:46.832 "subsystems": [ 00:24:46.832 { 00:24:46.832 "subsystem": "keyring", 00:24:46.832 "config": [ 00:24:46.832 { 00:24:46.832 "method": "keyring_file_add_key", 00:24:46.832 "params": { 00:24:46.832 "name": "key0", 00:24:46.832 "path": "/tmp/tmp.QHjGbPPhxX" 00:24:46.832 } 00:24:46.832 } 00:24:46.832 ] 00:24:46.832 }, 00:24:46.832 { 00:24:46.832 "subsystem": "iobuf", 00:24:46.832 "config": [ 00:24:46.832 { 00:24:46.832 "method": "iobuf_set_options", 00:24:46.832 "params": { 00:24:46.832 "small_pool_count": 8192, 00:24:46.832 "large_pool_count": 1024, 00:24:46.832 "small_bufsize": 8192, 00:24:46.832 "large_bufsize": 135168 00:24:46.832 } 00:24:46.832 } 00:24:46.832 ] 00:24:46.832 }, 00:24:46.832 { 00:24:46.832 "subsystem": "sock", 00:24:46.832 "config": [ 00:24:46.832 { 00:24:46.832 "method": "sock_impl_set_options", 00:24:46.832 "params": { 00:24:46.832 "impl_name": "posix", 00:24:46.832 "recv_buf_size": 2097152, 00:24:46.832 "send_buf_size": 2097152, 00:24:46.832 "enable_recv_pipe": true, 00:24:46.832 "enable_quickack": false, 00:24:46.832 "enable_placement_id": 0, 00:24:46.832 "enable_zerocopy_send_server": true, 00:24:46.832 "enable_zerocopy_send_client": false, 00:24:46.832 "zerocopy_threshold": 0, 00:24:46.832 "tls_version": 0, 00:24:46.832 "enable_ktls": false 00:24:46.832 } 00:24:46.832 }, 00:24:46.832 { 00:24:46.832 "method": "sock_impl_set_options", 00:24:46.832 "params": { 00:24:46.832 "impl_name": "ssl", 00:24:46.832 "recv_buf_size": 4096, 00:24:46.832 "send_buf_size": 4096, 00:24:46.832 "enable_recv_pipe": true, 00:24:46.832 "enable_quickack": false, 00:24:46.832 "enable_placement_id": 0, 00:24:46.832 "enable_zerocopy_send_server": true, 00:24:46.832 "enable_zerocopy_send_client": false, 00:24:46.832 "zerocopy_threshold": 0, 00:24:46.832 "tls_version": 0, 00:24:46.832 "enable_ktls": false 00:24:46.832 } 00:24:46.832 } 00:24:46.832 ] 00:24:46.832 }, 00:24:46.832 { 00:24:46.832 "subsystem": "vmd", 00:24:46.832 "config": [] 00:24:46.832 }, 00:24:46.832 { 00:24:46.832 "subsystem": "accel", 00:24:46.832 "config": [ 00:24:46.832 { 00:24:46.832 "method": "accel_set_options", 00:24:46.832 "params": { 00:24:46.832 "small_cache_size": 128, 00:24:46.832 "large_cache_size": 16, 00:24:46.832 "task_count": 2048, 00:24:46.832 "sequence_count": 2048, 00:24:46.832 "buf_count": 2048 00:24:46.832 } 00:24:46.832 } 00:24:46.832 ] 00:24:46.832 }, 00:24:46.832 { 00:24:46.832 "subsystem": "bdev", 00:24:46.832 "config": [ 00:24:46.832 { 00:24:46.832 "method": "bdev_set_options", 00:24:46.832 "params": { 00:24:46.832 "bdev_io_pool_size": 65535, 00:24:46.832 "bdev_io_cache_size": 256, 00:24:46.832 "bdev_auto_examine": true, 00:24:46.832 "iobuf_small_cache_size": 128, 00:24:46.832 "iobuf_large_cache_size": 16 00:24:46.832 } 00:24:46.832 }, 
00:24:46.832 { 00:24:46.832 "method": "bdev_raid_set_options", 00:24:46.832 "params": { 00:24:46.832 "process_window_size_kb": 1024 00:24:46.832 } 00:24:46.832 }, 00:24:46.832 { 00:24:46.832 "method": "bdev_iscsi_set_options", 00:24:46.832 "params": { 00:24:46.832 "timeout_sec": 30 00:24:46.832 } 00:24:46.832 }, 00:24:46.832 { 00:24:46.832 "method": "bdev_nvme_set_options", 00:24:46.832 "params": { 00:24:46.832 "action_on_timeout": "none", 00:24:46.832 "timeout_us": 0, 00:24:46.832 "timeout_admin_us": 0, 00:24:46.832 "keep_alive_timeout_ms": 10000, 00:24:46.832 "arbitration_burst": 0, 00:24:46.832 "low_priority_weight": 0, 00:24:46.832 "medium_priority_weight": 0, 00:24:46.832 "high_priority_weight": 0, 00:24:46.832 "nvme_adminq_poll_period_us": 10000, 00:24:46.832 "nvme_ioq_poll_period_us": 0, 00:24:46.832 "io_queue_requests": 512, 00:24:46.832 "delay_cmd_submit": true, 00:24:46.832 "transport_retry_count": 4, 00:24:46.832 "bdev_retry_count": 3, 00:24:46.832 "transport_ack_timeout": 0, 00:24:46.832 "ctrlr_loss_timeout_sec": 0, 00:24:46.832 "reconnect_delay_sec": 0, 00:24:46.832 "fast_io_fail_timeout_sec": 0, 00:24:46.832 "disable_auto_failback": false, 00:24:46.832 "generate_uuids": false, 00:24:46.832 "transport_tos": 0, 00:24:46.832 "nvme_error_stat": false, 00:24:46.832 "rdma_srq_size": 0, 00:24:46.832 "io_path_stat": false, 00:24:46.832 "allow_accel_sequence": false, 00:24:46.832 "rdma_max_cq_size": 0, 00:24:46.832 "rdma_cm_event_timeout_ms": 0, 00:24:46.832 "dhchap_digests": [ 00:24:46.832 "sha256", 00:24:46.832 "sha384", 00:24:46.832 "sha512" 00:24:46.832 ], 00:24:46.832 "dhchap_dhgroups": [ 00:24:46.832 "null", 00:24:46.832 "ffdhe2048", 00:24:46.833 "ffdhe3072", 00:24:46.833 "ffdhe4096", 00:24:46.833 "ffdhe6144", 00:24:46.833 "ffdhe8192" 00:24:46.833 ] 00:24:46.833 } 00:24:46.833 }, 00:24:46.833 { 00:24:46.833 "method": "bdev_nvme_attach_controller", 00:24:46.833 "params": { 00:24:46.833 "name": "nvme0", 00:24:46.833 "trtype": "TCP", 00:24:46.833 "adrfam": "IPv4", 00:24:46.833 "traddr": "10.0.0.2", 00:24:46.833 "trsvcid": "4420", 00:24:46.833 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.833 "prchk_reftag": false, 00:24:46.833 "prchk_guard": false, 00:24:46.833 "ctrlr_loss_timeout_sec": 0, 00:24:46.833 "reconnect_delay_sec": 0, 00:24:46.833 "fast_io_fail_timeout_sec": 0, 00:24:46.833 "psk": "key0", 00:24:46.833 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:46.833 "hdgst": false, 00:24:46.833 "ddgst": false 00:24:46.833 } 00:24:46.833 }, 00:24:46.833 { 00:24:46.833 "method": "bdev_nvme_set_hotplug", 00:24:46.833 "params": { 00:24:46.833 "period_us": 100000, 00:24:46.833 "enable": false 00:24:46.833 } 00:24:46.833 }, 00:24:46.833 { 00:24:46.833 "method": "bdev_enable_histogram", 00:24:46.833 "params": { 00:24:46.833 "name": "nvme0n1", 00:24:46.833 "enable": true 00:24:46.833 } 00:24:46.833 }, 00:24:46.833 { 00:24:46.833 "method": "bdev_wait_for_examine" 00:24:46.833 } 00:24:46.833 ] 00:24:46.833 }, 00:24:46.833 { 00:24:46.833 "subsystem": "nbd", 00:24:46.833 "config": [] 00:24:46.833 } 00:24:46.833 ] 00:24:46.833 }' 00:24:46.833 20:16:39 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 107087 00:24:46.833 20:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 107087 ']' 00:24:46.833 20:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 107087 00:24:46.833 20:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:46.833 20:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:46.833 
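The two JSON documents above are live snapshots: tgtcfg is taken with save_config against the target's default RPC socket, bperfcfg against the bdevperf socket. Compared with the hand-written config of the first pass, the PSK is now expressed through the keyring, i.e. a keyring_file_add_key block plus "psk": "key0" in nvmf_subsystem_add_host and bdev_nvme_attach_controller, instead of a raw file path. A sketch of how such snapshots can be captured; writing them to files is illustrative, since the script keeps them in the shell variables shown in the trace:

  # Capture both sides' running configuration for replay (sketch).
  scripts/rpc.py save_config > tgtcfg.json                             # target, default /var/tmp/spdk.sock
  scripts/rpc.py -s /var/tmp/bdevperf.sock save_config > bperfcfg.json # initiator (bdevperf)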
20:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 107087 00:24:46.833 20:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:46.833 20:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:46.833 20:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 107087' 00:24:46.833 killing process with pid 107087 00:24:46.833 20:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 107087 00:24:46.833 Received shutdown signal, test time was about 1.000000 seconds 00:24:46.833 00:24:46.833 Latency(us) 00:24:46.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.833 =================================================================================================================== 00:24:46.833 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:46.833 20:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 107087 00:24:46.833 20:16:39 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 106952 00:24:46.833 20:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 106952 ']' 00:24:46.833 20:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 106952 00:24:46.833 20:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:46.833 20:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:46.833 20:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 106952 00:24:46.833 20:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:46.833 20:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:46.833 20:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 106952' 00:24:46.833 killing process with pid 106952 00:24:46.833 20:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 106952 00:24:46.833 [2024-05-15 20:16:39.322060] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:46.833 20:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 106952 00:24:47.094 20:16:39 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:24:47.094 20:16:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:47.094 20:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:47.094 20:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:47.094 20:16:39 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:24:47.094 "subsystems": [ 00:24:47.094 { 00:24:47.094 "subsystem": "keyring", 00:24:47.094 "config": [ 00:24:47.094 { 00:24:47.094 "method": "keyring_file_add_key", 00:24:47.094 "params": { 00:24:47.094 "name": "key0", 00:24:47.094 "path": "/tmp/tmp.QHjGbPPhxX" 00:24:47.094 } 00:24:47.094 } 00:24:47.094 ] 00:24:47.094 }, 00:24:47.094 { 00:24:47.094 "subsystem": "iobuf", 00:24:47.094 "config": [ 00:24:47.094 { 00:24:47.094 "method": "iobuf_set_options", 00:24:47.094 "params": { 00:24:47.094 "small_pool_count": 8192, 00:24:47.094 "large_pool_count": 1024, 00:24:47.094 "small_bufsize": 8192, 00:24:47.094 "large_bufsize": 135168 00:24:47.094 } 00:24:47.094 } 00:24:47.094 ] 00:24:47.094 }, 00:24:47.094 { 00:24:47.094 "subsystem": 
"sock", 00:24:47.094 "config": [ 00:24:47.094 { 00:24:47.094 "method": "sock_impl_set_options", 00:24:47.094 "params": { 00:24:47.094 "impl_name": "posix", 00:24:47.094 "recv_buf_size": 2097152, 00:24:47.094 "send_buf_size": 2097152, 00:24:47.094 "enable_recv_pipe": true, 00:24:47.094 "enable_quickack": false, 00:24:47.094 "enable_placement_id": 0, 00:24:47.094 "enable_zerocopy_send_server": true, 00:24:47.094 "enable_zerocopy_send_client": false, 00:24:47.094 "zerocopy_threshold": 0, 00:24:47.094 "tls_version": 0, 00:24:47.094 "enable_ktls": false 00:24:47.094 } 00:24:47.094 }, 00:24:47.094 { 00:24:47.094 "method": "sock_impl_set_options", 00:24:47.094 "params": { 00:24:47.094 "impl_name": "ssl", 00:24:47.094 "recv_buf_size": 4096, 00:24:47.094 "send_buf_size": 4096, 00:24:47.094 "enable_recv_pipe": true, 00:24:47.094 "enable_quickack": false, 00:24:47.094 "enable_placement_id": 0, 00:24:47.094 "enable_zerocopy_send_server": true, 00:24:47.094 "enable_zerocopy_send_client": false, 00:24:47.094 "zerocopy_threshold": 0, 00:24:47.094 "tls_version": 0, 00:24:47.094 "enable_ktls": false 00:24:47.094 } 00:24:47.094 } 00:24:47.094 ] 00:24:47.094 }, 00:24:47.094 { 00:24:47.094 "subsystem": "vmd", 00:24:47.094 "config": [] 00:24:47.094 }, 00:24:47.094 { 00:24:47.094 "subsystem": "accel", 00:24:47.094 "config": [ 00:24:47.094 { 00:24:47.094 "method": "accel_set_options", 00:24:47.094 "params": { 00:24:47.094 "small_cache_size": 128, 00:24:47.094 "large_cache_size": 16, 00:24:47.094 "task_count": 2048, 00:24:47.094 "sequence_count": 2048, 00:24:47.094 "buf_count": 2048 00:24:47.094 } 00:24:47.094 } 00:24:47.094 ] 00:24:47.094 }, 00:24:47.094 { 00:24:47.094 "subsystem": "bdev", 00:24:47.094 "config": [ 00:24:47.094 { 00:24:47.094 "method": "bdev_set_options", 00:24:47.094 "params": { 00:24:47.094 "bdev_io_pool_size": 65535, 00:24:47.094 "bdev_io_cache_size": 256, 00:24:47.094 "bdev_auto_examine": true, 00:24:47.094 "iobuf_small_cache_size": 128, 00:24:47.094 "iobuf_large_cache_size": 16 00:24:47.094 } 00:24:47.094 }, 00:24:47.094 { 00:24:47.094 "method": "bdev_raid_set_options", 00:24:47.094 "params": { 00:24:47.094 "process_window_size_kb": 1024 00:24:47.094 } 00:24:47.094 }, 00:24:47.094 { 00:24:47.094 "method": "bdev_iscsi_set_options", 00:24:47.094 "params": { 00:24:47.094 "timeout_sec": 30 00:24:47.094 } 00:24:47.094 }, 00:24:47.094 { 00:24:47.094 "method": "bdev_nvme_set_options", 00:24:47.094 "params": { 00:24:47.094 "action_on_timeout": "none", 00:24:47.094 "timeout_us": 0, 00:24:47.095 "timeout_admin_us": 0, 00:24:47.095 "keep_alive_timeout_ms": 10000, 00:24:47.095 "arbitration_burst": 0, 00:24:47.095 "low_priority_weight": 0, 00:24:47.095 "medium_priority_weight": 0, 00:24:47.095 "high_priority_weight": 0, 00:24:47.095 "nvme_adminq_poll_period_us": 10000, 00:24:47.095 "nvme_ioq_poll_period_us": 0, 00:24:47.095 "io_queue_requests": 0, 00:24:47.095 "delay_cmd_submit": true, 00:24:47.095 "transport_retry_count": 4, 00:24:47.095 "bdev_retry_count": 3, 00:24:47.095 "transport_ack_timeout": 0, 00:24:47.095 "ctrlr_loss_timeout_sec": 0, 00:24:47.095 "reconnect_delay_sec": 0, 00:24:47.095 "fast_io_fail_timeout_sec": 0, 00:24:47.095 "disable_auto_failback": false, 00:24:47.095 "generate_uuids": false, 00:24:47.095 "transport_tos": 0, 00:24:47.095 "nvme_error_stat": false, 00:24:47.095 "rdma_srq_size": 0, 00:24:47.095 "io_path_stat": false, 00:24:47.095 "allow_accel_sequence": false, 00:24:47.095 "rdma_max_cq_size": 0, 00:24:47.095 "rdma_cm_event_timeout_ms": 0, 00:24:47.095 "dhchap_digests": [ 
00:24:47.095 "sha256", 00:24:47.095 "sha384", 00:24:47.095 "sha512" 00:24:47.095 ], 00:24:47.095 "dhchap_dhgroups": [ 00:24:47.095 "null", 00:24:47.095 "ffdhe2048", 00:24:47.095 "ffdhe3072", 00:24:47.095 "ffdhe4096", 00:24:47.095 "ffdhe6144", 00:24:47.095 "ffdhe8192" 00:24:47.095 ] 00:24:47.095 } 00:24:47.095 }, 00:24:47.095 { 00:24:47.095 "method": "bdev_nvme_set_hotplug", 00:24:47.095 "params": { 00:24:47.095 "period_us": 100000, 00:24:47.095 "enable": false 00:24:47.095 } 00:24:47.095 }, 00:24:47.095 { 00:24:47.095 "method": "bdev_malloc_create", 00:24:47.095 "params": { 00:24:47.095 "name": "malloc0", 00:24:47.095 "num_blocks": 8192, 00:24:47.095 "block_size": 4096, 00:24:47.095 "physical_block_size": 4096, 00:24:47.095 "uuid": "e989f533-30e0-458a-8967-04ff7db28670", 00:24:47.095 "optimal_io_boundary": 0 00:24:47.095 } 00:24:47.095 }, 00:24:47.095 { 00:24:47.095 "method": "bdev_wait_for_examine" 00:24:47.095 } 00:24:47.095 ] 00:24:47.095 }, 00:24:47.095 { 00:24:47.095 "subsystem": "nbd", 00:24:47.095 "config": [] 00:24:47.095 }, 00:24:47.095 { 00:24:47.095 "subsystem": "scheduler", 00:24:47.095 "config": [ 00:24:47.095 { 00:24:47.095 "method": "framework_set_scheduler", 00:24:47.095 "params": { 00:24:47.095 "name": "static" 00:24:47.095 } 00:24:47.095 } 00:24:47.095 ] 00:24:47.095 }, 00:24:47.095 { 00:24:47.095 "subsystem": "nvmf", 00:24:47.095 "config": [ 00:24:47.095 { 00:24:47.095 "method": "nvmf_set_config", 00:24:47.095 "params": { 00:24:47.095 "discovery_filter": "match_any", 00:24:47.095 "admin_cmd_passthru": { 00:24:47.095 "identify_ctrlr": false 00:24:47.095 } 00:24:47.095 } 00:24:47.095 }, 00:24:47.095 { 00:24:47.095 "method": "nvmf_set_max_subsystems", 00:24:47.095 "params": { 00:24:47.095 "max_subsystems": 1024 00:24:47.095 } 00:24:47.095 }, 00:24:47.095 { 00:24:47.095 "method": "nvmf_set_crdt", 00:24:47.095 "params": { 00:24:47.095 "crdt1": 0, 00:24:47.095 "crdt2": 0, 00:24:47.095 "crdt3": 0 00:24:47.095 } 00:24:47.095 }, 00:24:47.095 { 00:24:47.095 "method": "nvmf_create_transport", 00:24:47.095 "params": { 00:24:47.095 "trtype": "TCP", 00:24:47.095 "max_queue_depth": 128, 00:24:47.095 "max_io_qpairs_per_ctrlr": 127, 00:24:47.095 "in_capsule_data_size": 4096, 00:24:47.095 "max_io_size": 131072, 00:24:47.095 "io_unit_size": 131072, 00:24:47.095 "max_aq_depth": 128, 00:24:47.095 "num_shared_buffers": 511, 00:24:47.095 "buf_cache_size": 4294967295, 00:24:47.095 "dif_insert_or_strip": false, 00:24:47.095 "zcopy": false, 00:24:47.095 "c2h_success": false, 00:24:47.095 "sock_priority": 0, 00:24:47.095 "abort_timeout_sec": 1, 00:24:47.095 "ack_timeout": 0, 00:24:47.095 "data_wr_pool_size": 0 00:24:47.095 } 00:24:47.095 }, 00:24:47.095 { 00:24:47.095 "method": "nvmf_create_subsystem", 00:24:47.095 "params": { 00:24:47.095 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.095 "allow_any_host": false, 00:24:47.095 "serial_number": "00000000000000000000", 00:24:47.095 "model_number": "SPDK bdev Controller", 00:24:47.095 "max_namespaces": 32, 00:24:47.095 "min_cntlid": 1, 00:24:47.095 "max_cntlid": 65519, 00:24:47.095 "ana_reporting": false 00:24:47.095 } 00:24:47.095 }, 00:24:47.095 { 00:24:47.095 "method": "nvmf_subsystem_add_host", 00:24:47.095 "params": { 00:24:47.095 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.095 "host": "nqn.2016-06.io.spdk:host1", 00:24:47.095 "psk": "key0" 00:24:47.095 } 00:24:47.095 }, 00:24:47.095 { 00:24:47.095 "method": "nvmf_subsystem_add_ns", 00:24:47.095 "params": { 00:24:47.095 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.095 "namespace": { 
00:24:47.095 "nsid": 1, 00:24:47.095 "bdev_name": "malloc0", 00:24:47.095 "nguid": "E989F53330E0458A896704FF7DB28670", 00:24:47.095 "uuid": "e989f533-30e0-458a-8967-04ff7db28670", 00:24:47.095 "no_auto_visible": false 00:24:47.095 } 00:24:47.095 } 00:24:47.095 }, 00:24:47.095 { 00:24:47.095 "method": "nvmf_subsystem_add_listener", 00:24:47.095 "params": { 00:24:47.095 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.095 "listen_address": { 00:24:47.095 "trtype": "TCP", 00:24:47.095 "adrfam": "IPv4", 00:24:47.095 "traddr": "10.0.0.2", 00:24:47.095 "trsvcid": "4420" 00:24:47.095 }, 00:24:47.095 "secure_channel": true 00:24:47.095 } 00:24:47.095 } 00:24:47.095 ] 00:24:47.095 } 00:24:47.095 ] 00:24:47.095 }' 00:24:47.095 20:16:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=107662 00:24:47.095 20:16:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 107662 00:24:47.095 20:16:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:47.095 20:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 107662 ']' 00:24:47.095 20:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:47.095 20:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:47.095 20:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:47.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:47.095 20:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:47.095 20:16:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:47.095 [2024-05-15 20:16:39.517206] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:24:47.095 [2024-05-15 20:16:39.517259] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:47.095 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.356 [2024-05-15 20:16:39.605149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.356 [2024-05-15 20:16:39.668892] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:47.356 [2024-05-15 20:16:39.668930] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:47.356 [2024-05-15 20:16:39.668938] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:47.356 [2024-05-15 20:16:39.668944] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:47.356 [2024-05-15 20:16:39.668950] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:47.356 [2024-05-15 20:16:39.669001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.617 [2024-05-15 20:16:39.857889] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:47.617 [2024-05-15 20:16:39.889874] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:47.617 [2024-05-15 20:16:39.889920] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:47.617 [2024-05-15 20:16:39.908509] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:47.879 20:16:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:47.879 20:16:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:47.879 20:16:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:47.879 20:16:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:47.879 20:16:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:48.140 20:16:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:48.140 20:16:40 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=107877 00:24:48.140 20:16:40 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 107877 /var/tmp/bdevperf.sock 00:24:48.140 20:16:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 107877 ']' 00:24:48.140 20:16:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:48.140 20:16:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:48.140 20:16:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:48.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
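Two SPDK applications are now alive at the same time, each with its own RPC socket: the nvmf target answers on the default /var/tmp/spdk.sock, while bdevperf, started with -z so it idles until configured, answers on /var/tmp/bdevperf.sock. A short sketch of how each is addressed with rpc.py -s; nvmf_get_subsystems is only an illustrative query and not part of this run, whereas the last two commands are what the harness issues a little further down.

  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_subsystems
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
  # with -z, bdevperf sits idle until perform_tests is pushed over its socket
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests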
00:24:48.140 20:16:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:48.140 20:16:40 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:48.140 20:16:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:48.140 20:16:40 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:24:48.140 "subsystems": [ 00:24:48.140 { 00:24:48.140 "subsystem": "keyring", 00:24:48.140 "config": [ 00:24:48.140 { 00:24:48.140 "method": "keyring_file_add_key", 00:24:48.140 "params": { 00:24:48.140 "name": "key0", 00:24:48.140 "path": "/tmp/tmp.QHjGbPPhxX" 00:24:48.140 } 00:24:48.140 } 00:24:48.140 ] 00:24:48.140 }, 00:24:48.140 { 00:24:48.140 "subsystem": "iobuf", 00:24:48.140 "config": [ 00:24:48.140 { 00:24:48.140 "method": "iobuf_set_options", 00:24:48.140 "params": { 00:24:48.140 "small_pool_count": 8192, 00:24:48.140 "large_pool_count": 1024, 00:24:48.140 "small_bufsize": 8192, 00:24:48.140 "large_bufsize": 135168 00:24:48.140 } 00:24:48.140 } 00:24:48.140 ] 00:24:48.140 }, 00:24:48.140 { 00:24:48.140 "subsystem": "sock", 00:24:48.140 "config": [ 00:24:48.140 { 00:24:48.140 "method": "sock_impl_set_options", 00:24:48.140 "params": { 00:24:48.140 "impl_name": "posix", 00:24:48.140 "recv_buf_size": 2097152, 00:24:48.140 "send_buf_size": 2097152, 00:24:48.140 "enable_recv_pipe": true, 00:24:48.140 "enable_quickack": false, 00:24:48.141 "enable_placement_id": 0, 00:24:48.141 "enable_zerocopy_send_server": true, 00:24:48.141 "enable_zerocopy_send_client": false, 00:24:48.141 "zerocopy_threshold": 0, 00:24:48.141 "tls_version": 0, 00:24:48.141 "enable_ktls": false 00:24:48.141 } 00:24:48.141 }, 00:24:48.141 { 00:24:48.141 "method": "sock_impl_set_options", 00:24:48.141 "params": { 00:24:48.141 "impl_name": "ssl", 00:24:48.141 "recv_buf_size": 4096, 00:24:48.141 "send_buf_size": 4096, 00:24:48.141 "enable_recv_pipe": true, 00:24:48.141 "enable_quickack": false, 00:24:48.141 "enable_placement_id": 0, 00:24:48.141 "enable_zerocopy_send_server": true, 00:24:48.141 "enable_zerocopy_send_client": false, 00:24:48.141 "zerocopy_threshold": 0, 00:24:48.141 "tls_version": 0, 00:24:48.141 "enable_ktls": false 00:24:48.141 } 00:24:48.141 } 00:24:48.141 ] 00:24:48.141 }, 00:24:48.141 { 00:24:48.141 "subsystem": "vmd", 00:24:48.141 "config": [] 00:24:48.141 }, 00:24:48.141 { 00:24:48.141 "subsystem": "accel", 00:24:48.141 "config": [ 00:24:48.141 { 00:24:48.141 "method": "accel_set_options", 00:24:48.141 "params": { 00:24:48.141 "small_cache_size": 128, 00:24:48.141 "large_cache_size": 16, 00:24:48.141 "task_count": 2048, 00:24:48.141 "sequence_count": 2048, 00:24:48.141 "buf_count": 2048 00:24:48.141 } 00:24:48.141 } 00:24:48.141 ] 00:24:48.141 }, 00:24:48.141 { 00:24:48.141 "subsystem": "bdev", 00:24:48.141 "config": [ 00:24:48.141 { 00:24:48.141 "method": "bdev_set_options", 00:24:48.141 "params": { 00:24:48.141 "bdev_io_pool_size": 65535, 00:24:48.141 "bdev_io_cache_size": 256, 00:24:48.141 "bdev_auto_examine": true, 00:24:48.141 "iobuf_small_cache_size": 128, 00:24:48.141 "iobuf_large_cache_size": 16 00:24:48.141 } 00:24:48.141 }, 00:24:48.141 { 00:24:48.141 "method": "bdev_raid_set_options", 00:24:48.141 "params": { 00:24:48.141 "process_window_size_kb": 1024 00:24:48.141 } 00:24:48.141 }, 00:24:48.141 { 00:24:48.141 "method": "bdev_iscsi_set_options", 00:24:48.141 "params": { 00:24:48.141 "timeout_sec": 30 00:24:48.141 } 
00:24:48.141 }, 00:24:48.141 { 00:24:48.141 "method": "bdev_nvme_set_options", 00:24:48.141 "params": { 00:24:48.141 "action_on_timeout": "none", 00:24:48.141 "timeout_us": 0, 00:24:48.141 "timeout_admin_us": 0, 00:24:48.141 "keep_alive_timeout_ms": 10000, 00:24:48.141 "arbitration_burst": 0, 00:24:48.141 "low_priority_weight": 0, 00:24:48.141 "medium_priority_weight": 0, 00:24:48.141 "high_priority_weight": 0, 00:24:48.141 "nvme_adminq_poll_period_us": 10000, 00:24:48.141 "nvme_ioq_poll_period_us": 0, 00:24:48.141 "io_queue_requests": 512, 00:24:48.141 "delay_cmd_submit": true, 00:24:48.141 "transport_retry_count": 4, 00:24:48.141 "bdev_retry_count": 3, 00:24:48.141 "transport_ack_timeout": 0, 00:24:48.141 "ctrlr_loss_timeout_sec": 0, 00:24:48.141 "reconnect_delay_sec": 0, 00:24:48.141 "fast_io_fail_timeout_sec": 0, 00:24:48.141 "disable_auto_failback": false, 00:24:48.141 "generate_uuids": false, 00:24:48.141 "transport_tos": 0, 00:24:48.141 "nvme_error_stat": false, 00:24:48.141 "rdma_srq_size": 0, 00:24:48.141 "io_path_stat": false, 00:24:48.141 "allow_accel_sequence": false, 00:24:48.141 "rdma_max_cq_size": 0, 00:24:48.141 "rdma_cm_event_timeout_ms": 0, 00:24:48.141 "dhchap_digests": [ 00:24:48.141 "sha256", 00:24:48.141 "sha384", 00:24:48.141 "sha512" 00:24:48.141 ], 00:24:48.141 "dhchap_dhgroups": [ 00:24:48.141 "null", 00:24:48.141 "ffdhe2048", 00:24:48.141 "ffdhe3072", 00:24:48.141 "ffdhe4096", 00:24:48.141 "ffdhe6144", 00:24:48.141 "ffdhe8192" 00:24:48.141 ] 00:24:48.141 } 00:24:48.141 }, 00:24:48.141 { 00:24:48.141 "method": "bdev_nvme_attach_controller", 00:24:48.141 "params": { 00:24:48.141 "name": "nvme0", 00:24:48.141 "trtype": "TCP", 00:24:48.141 "adrfam": "IPv4", 00:24:48.141 "traddr": "10.0.0.2", 00:24:48.141 "trsvcid": "4420", 00:24:48.141 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:48.141 "prchk_reftag": false, 00:24:48.141 "prchk_guard": false, 00:24:48.141 "ctrlr_loss_timeout_sec": 0, 00:24:48.141 "reconnect_delay_sec": 0, 00:24:48.141 "fast_io_fail_timeout_sec": 0, 00:24:48.141 "psk": "key0", 00:24:48.141 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:48.141 "hdgst": false, 00:24:48.141 "ddgst": false 00:24:48.141 } 00:24:48.141 }, 00:24:48.141 { 00:24:48.141 "method": "bdev_nvme_set_hotplug", 00:24:48.141 "params": { 00:24:48.141 "period_us": 100000, 00:24:48.141 "enable": false 00:24:48.141 } 00:24:48.141 }, 00:24:48.141 { 00:24:48.141 "method": "bdev_enable_histogram", 00:24:48.141 "params": { 00:24:48.141 "name": "nvme0n1", 00:24:48.141 "enable": true 00:24:48.141 } 00:24:48.141 }, 00:24:48.141 { 00:24:48.141 "method": "bdev_wait_for_examine" 00:24:48.141 } 00:24:48.141 ] 00:24:48.141 }, 00:24:48.141 { 00:24:48.141 "subsystem": "nbd", 00:24:48.141 "config": [] 00:24:48.141 } 00:24:48.141 ] 00:24:48.141 }' 00:24:48.141 [2024-05-15 20:16:40.465728] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
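The TLS wiring lives in the two JSON dumps: the bdevperf config just above registers the PSK file as keyring key "key0" (keyring_file_add_key, path /tmp/tmp.QHjGbPPhxX) and references it by name in bdev_nvme_attach_controller, while the target config earlier ties the same key name to nqn.2016-06.io.spdk:host1 via nvmf_subsystem_add_host and marks the listener secure_channel. A hedged rpc.py equivalent of the host-side half; flag spellings can differ between SPDK releases, and the JSON above is what this run actually used.

  ./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QHjGbPPhxX
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk key0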
00:24:48.141 [2024-05-15 20:16:40.465779] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107877 ] 00:24:48.141 EAL: No free 2048 kB hugepages reported on node 1 00:24:48.141 [2024-05-15 20:16:40.531526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.141 [2024-05-15 20:16:40.595886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:48.402 [2024-05-15 20:16:40.726460] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:48.972 20:16:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:48.972 20:16:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:48.972 20:16:41 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:48.972 20:16:41 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:24:49.233 20:16:41 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.233 20:16:41 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:49.233 Running I/O for 1 seconds... 00:24:50.176 00:24:50.176 Latency(us) 00:24:50.176 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.176 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:50.176 Verification LBA range: start 0x0 length 0x2000 00:24:50.176 nvme0n1 : 1.07 1824.03 7.13 0.00 0.00 68239.68 6471.68 159034.03 00:24:50.176 =================================================================================================================== 00:24:50.176 Total : 1824.03 7.13 0.00 0.00 68239.68 6471.68 159034.03 00:24:50.176 0 00:24:50.176 20:16:42 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:24:50.176 20:16:42 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:24:50.176 20:16:42 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:50.176 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:24:50.176 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:24:50.176 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:24:50.176 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:50.436 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:24:50.436 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:24:50.436 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:24:50.437 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:50.437 nvmf_trace.0 00:24:50.437 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:24:50.437 20:16:42 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 107877 00:24:50.437 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 107877 ']' 00:24:50.437 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 107877 
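The process_shm --id 0 step above archives the target's shared-memory trace file so it can be decoded after the run, which is what the earlier app_setup_trace notices point at. Boiled down, with $output_dir standing in for the harness output path:

  shm_files=$(find /dev/shm -name '*.0' -printf '%f\n')          # -> nvmf_trace.0
  tar -C /dev/shm/ -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" $shm_files
  # the archived nvmf_trace.0 can later be decoded offline with the spdk_trace tool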
00:24:50.437 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:50.437 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:50.437 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 107877 00:24:50.437 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:50.437 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:50.437 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 107877' 00:24:50.437 killing process with pid 107877 00:24:50.437 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 107877 00:24:50.437 Received shutdown signal, test time was about 1.000000 seconds 00:24:50.437 00:24:50.437 Latency(us) 00:24:50.437 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.437 =================================================================================================================== 00:24:50.437 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:50.437 20:16:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 107877 00:24:50.697 20:16:42 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:50.697 20:16:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:50.697 20:16:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:24:50.697 20:16:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:50.697 20:16:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:24:50.697 20:16:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:50.697 20:16:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:50.697 rmmod nvme_tcp 00:24:50.697 rmmod nvme_fabrics 00:24:50.697 rmmod nvme_keyring 00:24:50.697 20:16:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:50.697 20:16:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:24:50.697 20:16:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:24:50.697 20:16:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 107662 ']' 00:24:50.697 20:16:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 107662 00:24:50.697 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 107662 ']' 00:24:50.697 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 107662 00:24:50.697 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:50.697 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:50.697 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 107662 00:24:50.697 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:50.697 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:50.697 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 107662' 00:24:50.697 killing process with pid 107662 00:24:50.697 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 107662 00:24:50.697 [2024-05-15 20:16:43.085708] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:50.698 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 107662 
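Teardown runs in a deliberate order: sync, unload the initiator-side kernel modules (the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring going away), then kill the target, and finally drop the namespace, flush the leftover address and delete the temporary PSK files (continued at the top of the next block). A condensed and approximate sketch; the killprocess and _remove_spdk_ns helpers add checks and retries that are skipped here.

  sync
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                              # 107662 in this run
  ip netns delete cvl_0_0_ns_spdk              # roughly what _remove_spdk_ns does
  ip -4 addr flush cvl_0_1
  rm -f /tmp/tmp.7NCws4YL50 /tmp/tmp.eM0TWQqvHe /tmp/tmp.QHjGbPPhxX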
00:24:50.958 20:16:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:50.958 20:16:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:50.958 20:16:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:50.958 20:16:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:50.958 20:16:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:50.958 20:16:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.958 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:50.958 20:16:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.872 20:16:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:52.872 20:16:45 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.7NCws4YL50 /tmp/tmp.eM0TWQqvHe /tmp/tmp.QHjGbPPhxX 00:24:52.872 00:24:52.872 real 1m21.713s 00:24:52.872 user 2m3.412s 00:24:52.872 sys 0m28.616s 00:24:52.872 20:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:52.872 20:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:52.872 ************************************ 00:24:52.872 END TEST nvmf_tls 00:24:52.872 ************************************ 00:24:52.872 20:16:45 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:52.872 20:16:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:52.872 20:16:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:52.872 20:16:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:53.134 ************************************ 00:24:53.134 START TEST nvmf_fips 00:24:53.134 ************************************ 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:53.134 * Looking for test storage... 
00:24:53.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.134 20:16:45 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:24:53.134 20:16:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:24:53.135 20:16:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:24:53.397 20:16:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:24:53.397 20:16:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:24:53.397 20:16:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:53.397 20:16:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:24:53.397 20:16:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:24:53.397 20:16:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:53.397 20:16:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:24:53.397 20:16:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:24:53.397 20:16:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:53.397 20:16:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:24:53.397 20:16:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:53.397 20:16:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:24:53.397 20:16:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:53.397 20:16:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:24:53.397 20:16:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:24:53.397 20:16:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:24:53.397 Error setting digest 00:24:53.397 002211C51D7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:24:53.397 002211C51D7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:24:53.397 20:16:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:24:53.397 20:16:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:53.397 20:16:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:53.397 20:16:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:53.397 20:16:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:24:53.397 20:16:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:53.397 20:16:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:53.397 20:16:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:53.397 20:16:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:53.397 20:16:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:53.397 20:16:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.398 20:16:45 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:53.398 20:16:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.398 20:16:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:53.398 20:16:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:53.398 20:16:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:24:53.398 20:16:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:01.545 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:01.545 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:25:01.545 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:01.545 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:01.545 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:01.545 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:01.545 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:01.545 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:25:01.545 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:01.545 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:25:01.545 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:25:01.545 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:25:01.545 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:25:01.545 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:25:01.545 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:25:01.545 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:01.545 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:01.545 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:01.546 
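The FIPS prerequisites a few lines up reduce to four checks: the OpenSSL version must be at least 3.0.0, fips.so must exist under the directory reported by openssl info -modulesdir, the provider list loaded through the generated spdk_fips.conf must contain both the base and the fips providers, and a non-approved digest has to be rejected (the "Error setting digest" lines above are the expected failure, not a test problem). A manual re-run of the same checks, assuming build_openssl_config has already written spdk_fips.conf in the current directory; md5 of /dev/null is used here in place of the script's /dev/fd trick.

  openssl version | awk '{print $2}'                 # must compare >= 3.0.0
  openssl info -modulesdir                           # fips.so must live there
  OPENSSL_CONF=spdk_fips.conf openssl list -providers | grep name
  OPENSSL_CONF=spdk_fips.conf openssl md5 /dev/null  # expected to fail: MD5 is not FIPS-approved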
20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:01.546 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:01.546 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:01.546 Found net devices under 0000:31:00.0: cvl_0_0 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:01.546 Found net devices under 0000:31:00.1: cvl_0_1 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:01.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:01.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.673 ms 00:25:01.546 00:25:01.546 --- 10.0.0.2 ping statistics --- 00:25:01.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.546 rtt min/avg/max/mdev = 0.673/0.673/0.673/0.000 ms 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:01.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:01.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.424 ms 00:25:01.546 00:25:01.546 --- 10.0.0.1 ping statistics --- 00:25:01.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.546 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=113081 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 113081 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 113081 ']' 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:01.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:01.546 20:16:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:01.546 [2024-05-15 20:16:53.949695] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:25:01.546 [2024-05-15 20:16:53.949768] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:01.546 EAL: No free 2048 kB hugepages reported on node 1 00:25:01.546 [2024-05-15 20:16:54.027183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.808 [2024-05-15 20:16:54.099467] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:01.808 [2024-05-15 20:16:54.099504] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
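The namespace plumbing above is what lets target and initiator share one machine while still sending NVMe/TCP over real E810 ports: cvl_0_0 is moved into cvl_0_0_ns_spdk and given 10.0.0.2/24 for the target, cvl_0_1 stays in the root namespace with 10.0.0.1/24 for the host, port 4420 is opened, and the two pings prove reachability in both directions. Condensed from the commands shown above:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1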
00:25:01.808 [2024-05-15 20:16:54.099511] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:01.808 [2024-05-15 20:16:54.099518] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:01.808 [2024-05-15 20:16:54.099523] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:01.808 [2024-05-15 20:16:54.099541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:02.380 20:16:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:02.380 20:16:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:25:02.380 20:16:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:02.380 20:16:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:02.380 20:16:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:02.380 20:16:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:02.380 20:16:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:25:02.380 20:16:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:02.380 20:16:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:02.380 20:16:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:02.380 20:16:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:02.380 20:16:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:02.380 20:16:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:02.380 20:16:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:02.641 [2024-05-15 20:16:55.022652] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:02.641 [2024-05-15 20:16:55.038635] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:02.641 [2024-05-15 20:16:55.038677] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:02.641 [2024-05-15 20:16:55.038839] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:02.641 [2024-05-15 20:16:55.065412] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:02.641 malloc0 00:25:02.641 20:16:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:02.641 20:16:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=113434 00:25:02.641 20:16:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 113434 /var/tmp/bdevperf.sock 00:25:02.641 20:16:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:02.641 20:16:55 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@827 -- # '[' -z 113434 ']' 00:25:02.641 20:16:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:02.641 20:16:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:02.641 20:16:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:02.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:02.641 20:16:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:02.641 20:16:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:02.902 [2024-05-15 20:16:55.155248] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:25:02.902 [2024-05-15 20:16:55.155297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113434 ] 00:25:02.902 EAL: No free 2048 kB hugepages reported on node 1 00:25:02.902 [2024-05-15 20:16:55.209355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.902 [2024-05-15 20:16:55.261378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:02.902 20:16:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:02.902 20:16:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:25:02.902 20:16:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:03.163 [2024-05-15 20:16:55.512722] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:03.163 [2024-05-15 20:16:55.512783] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:03.163 TLSTESTn1 00:25:03.163 20:16:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:03.424 Running I/O for 10 seconds... 
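The FIPS variant exercises the same TLS path but hands the PSK to the initiator as a file rather than a keyring name, which is why the deprecation warnings about nvmf_tcp_psk_path and spdk_nvme_ctrlr_opts.psk appear above (both are scheduled for removal in v24.09). The key itself is in the NVMe TLS PSK interchange format and must be written without a trailing newline and restricted to mode 0600. Condensed from the steps above; the results of the 10-second run follow below.

  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > key.txt
  chmod 0600 key.txt
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk key.txt
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests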
00:25:13.427 00:25:13.427 Latency(us) 00:25:13.427 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:13.427 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:13.427 Verification LBA range: start 0x0 length 0x2000 00:25:13.427 TLSTESTn1 : 10.05 3627.22 14.17 0.00 0.00 35198.18 5570.56 60293.12 00:25:13.427 =================================================================================================================== 00:25:13.427 Total : 3627.22 14.17 0.00 0.00 35198.18 5570.56 60293.12 00:25:13.427 0 00:25:13.427 20:17:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:25:13.427 20:17:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:25:13.427 20:17:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:25:13.427 20:17:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:25:13.427 20:17:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:25:13.427 20:17:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:13.427 20:17:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:25:13.427 20:17:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:25:13.427 20:17:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:25:13.427 20:17:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:13.427 nvmf_trace.0 00:25:13.427 20:17:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:25:13.427 20:17:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 113434 00:25:13.427 20:17:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 113434 ']' 00:25:13.427 20:17:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 113434 00:25:13.427 20:17:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:25:13.427 20:17:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:13.427 20:17:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 113434 00:25:13.687 20:17:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:25:13.687 20:17:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:25:13.687 20:17:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 113434' 00:25:13.687 killing process with pid 113434 00:25:13.687 20:17:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 113434 00:25:13.687 Received shutdown signal, test time was about 10.000000 seconds 00:25:13.687 00:25:13.687 Latency(us) 00:25:13.687 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:13.687 =================================================================================================================== 00:25:13.687 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:13.687 [2024-05-15 20:17:05.953666] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:13.688 20:17:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 113434 00:25:13.688 20:17:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:25:13.688 20:17:06 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:25:13.688 20:17:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:25:13.688 20:17:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:13.688 20:17:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:25:13.688 20:17:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:13.688 20:17:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:13.688 rmmod nvme_tcp 00:25:13.688 rmmod nvme_fabrics 00:25:13.688 rmmod nvme_keyring 00:25:13.688 20:17:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:13.688 20:17:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:25:13.688 20:17:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:25:13.688 20:17:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 113081 ']' 00:25:13.688 20:17:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 113081 00:25:13.688 20:17:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 113081 ']' 00:25:13.688 20:17:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 113081 00:25:13.688 20:17:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:25:13.688 20:17:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:13.688 20:17:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 113081 00:25:13.688 20:17:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:13.688 20:17:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:13.688 20:17:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 113081' 00:25:13.688 killing process with pid 113081 00:25:13.688 20:17:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 113081 00:25:13.688 [2024-05-15 20:17:06.181380] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:13.688 [2024-05-15 20:17:06.181416] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:13.688 20:17:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 113081 00:25:13.949 20:17:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:13.949 20:17:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:13.949 20:17:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:13.949 20:17:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:13.949 20:17:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:13.949 20:17:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.949 20:17:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:13.949 20:17:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:16.497 00:25:16.497 real 0m23.010s 00:25:16.497 user 0m22.743s 00:25:16.497 sys 0m10.469s 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:16.497 ************************************ 00:25:16.497 END TEST nvmf_fips 00:25:16.497 ************************************ 00:25:16.497 20:17:08 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:25:16.497 20:17:08 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:16.497 20:17:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:16.497 20:17:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:16.497 20:17:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:16.497 ************************************ 00:25:16.497 START TEST nvmf_fuzz 00:25:16.497 ************************************ 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:16.497 * Looking for test storage... 00:25:16.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:16.497 20:17:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:16.498 20:17:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:16.498 20:17:08 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:16.498 20:17:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:16.498 20:17:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:16.498 20:17:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:16.498 
20:17:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:16.498 20:17:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:16.498 20:17:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.498 20:17:08 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:16.498 20:17:08 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.498 20:17:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:16.498 20:17:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:16.498 20:17:08 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:25:16.498 20:17:08 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- 
nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:24.646 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:24.646 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:24.646 Found net devices under 0000:31:00.0: cvl_0_0 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:24.646 Found net devices under 0000:31:00.1: cvl_0_1 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:24.646 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:24.647 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:24.647 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:24.647 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:24.647 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:24.647 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.790 ms 00:25:24.647 00:25:24.647 --- 10.0.0.2 ping statistics --- 00:25:24.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:24.647 rtt min/avg/max/mdev = 0.790/0.790/0.790/0.000 ms 00:25:24.647 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:24.647 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:24.647 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.352 ms 00:25:24.647 00:25:24.647 --- 10.0.0.1 ping statistics --- 00:25:24.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:24.647 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:25:24.647 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:24.647 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:25:24.647 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:24.647 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:24.647 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:24.647 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:24.647 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:24.647 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:24.647 20:17:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:24.647 20:17:16 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:24.647 20:17:16 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=120127 00:25:24.647 20:17:16 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:24.647 20:17:16 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 120127 00:25:24.647 20:17:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@827 -- # '[' -z 120127 ']' 00:25:24.647 20:17:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:24.647 20:17:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:24.647 20:17:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:24.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
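The fuzz test builds its TCP topology by moving one port of the NIC (cvl_0_0) into a network namespace for the target while the sibling port (cvl_0_1) stays in the host as the initiator, then launches nvmf_tgt inside that namespace. A condensed sketch of the steps traced above; interface names and addresses are specific to this testbed.

# Two-port loopback-style topology used by the test (names/IPs from this run).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator-side port stays in the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # sanity-check reachability both ways
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# Start the target inside the namespace; -m 0x1 pins this fuzz run to a single core.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &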
00:25:24.647 20:17:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:24.647 20:17:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:25.594 20:17:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:25.594 20:17:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@860 -- # return 0 00:25:25.594 20:17:17 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:25.594 20:17:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.594 20:17:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:25.594 20:17:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.594 20:17:17 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:25.594 20:17:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.594 20:17:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:25.594 Malloc0 00:25:25.594 20:17:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.595 20:17:17 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:25.595 20:17:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.595 20:17:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:25.595 20:17:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.595 20:17:17 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:25.595 20:17:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.595 20:17:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:25.595 20:17:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.595 20:17:17 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:25.595 20:17:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.595 20:17:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:25.595 20:17:17 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.595 20:17:17 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:25.595 20:17:17 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:57.766 Fuzzing completed. 
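The run that just completed was set up by creating a TCP transport, backing a subsystem with a malloc bdev, exposing it on 10.0.0.2:4420, and pointing nvme_fuzz at that listener. The sketch below strings together the same RPCs that appear in the trace; the 30-second duration and seed 123456 are the values used in this run.

# Configure the target over its default RPC socket, then fuzz it (values from this run).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"                              # talks to /var/tmp/spdk.sock by default

$RPC nvmf_create_transport -t tcp -o -u 8192            # TCP transport with the script's options
$RPC bdev_malloc_create -b Malloc0 64 512               # 64 MiB RAM-backed bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# First fuzz pass: 30 s of randomized commands with a fixed seed, as in the trace.
$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
    -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a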
Shutting down the fuzz application 00:25:57.766 00:25:57.766 Dumping successful admin opcodes: 00:25:57.766 8, 9, 10, 24, 00:25:57.766 Dumping successful io opcodes: 00:25:57.766 0, 9, 00:25:57.766 NS: 0x200003aeff00 I/O qp, Total commands completed: 814654, total successful commands: 4732, random_seed: 3902875072 00:25:57.766 NS: 0x200003aeff00 admin qp, Total commands completed: 104833, total successful commands: 865, random_seed: 1342991296 00:25:57.766 20:17:48 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:57.766 Fuzzing completed. Shutting down the fuzz application 00:25:57.766 00:25:57.766 Dumping successful admin opcodes: 00:25:57.766 24, 00:25:57.766 Dumping successful io opcodes: 00:25:57.766 00:25:57.766 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1577288269 00:25:57.766 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1577386419 00:25:57.766 20:17:49 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:57.766 20:17:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.766 20:17:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:57.766 20:17:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.766 20:17:49 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:57.766 20:17:49 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:57.766 20:17:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:57.766 20:17:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:25:57.766 20:17:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:57.766 20:17:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:25:57.766 20:17:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:57.766 20:17:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:57.766 rmmod nvme_tcp 00:25:57.766 rmmod nvme_fabrics 00:25:57.766 rmmod nvme_keyring 00:25:57.766 20:17:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:57.766 20:17:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:25:57.766 20:17:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:25:57.766 20:17:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 120127 ']' 00:25:57.766 20:17:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 120127 00:25:57.766 20:17:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@946 -- # '[' -z 120127 ']' 00:25:57.766 20:17:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@950 -- # kill -0 120127 00:25:57.766 20:17:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # uname 00:25:57.766 20:17:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:57.766 20:17:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 120127 00:25:57.766 20:17:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:57.766 20:17:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:57.766 
20:17:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 120127' 00:25:57.766 killing process with pid 120127 00:25:57.766 20:17:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@965 -- # kill 120127 00:25:57.766 20:17:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@970 -- # wait 120127 00:25:57.766 20:17:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:57.766 20:17:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:57.766 20:17:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:57.766 20:17:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:57.766 20:17:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:57.766 20:17:49 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.766 20:17:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:57.766 20:17:49 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:59.690 20:17:52 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:59.690 20:17:52 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:59.690 00:25:59.690 real 0m43.596s 00:25:59.690 user 0m56.590s 00:25:59.690 sys 0m16.230s 00:25:59.690 20:17:52 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:59.690 20:17:52 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:59.690 ************************************ 00:25:59.690 END TEST nvmf_fuzz 00:25:59.690 ************************************ 00:25:59.690 20:17:52 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:59.690 20:17:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:59.690 20:17:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:59.690 20:17:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:59.690 ************************************ 00:25:59.690 START TEST nvmf_multiconnection 00:25:59.691 ************************************ 00:25:59.691 20:17:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:59.954 * Looking for test storage... 
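Each test section above ends with the same nvmftestfini teardown: unload the NVMe/TCP kernel modules, kill the target, remove the namespace, and flush the initiator-side address. A compressed sketch of that pattern under the assumption that the namespace and interface names match this run; remove_spdk_ns in the trace is approximated here by a plain namespace delete.

# Teardown pattern seen at the end of each test section (names/PID from this run).
modprobe -v -r nvme-tcp                  # the rmmod lines above come from this call
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"       # stop the nvmf_tgt started for this section
ip netns delete cvl_0_0_ns_spdk          # assumption: stands in for remove_spdk_ns
ip -4 addr flush cvl_0_1                 # clear the initiator-side address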
00:25:59.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:59.954 20:17:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:59.954 20:17:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:59.954 20:17:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:59.954 20:17:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:59.954 20:17:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:59.954 20:17:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:59.954 20:17:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:59.954 20:17:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:59.954 20:17:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:59.954 20:17:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:59.954 20:17:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:59.954 20:17:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:59.954 20:17:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:59.954 20:17:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:25:59.955 20:17:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:08.103 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:08.103 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:26:08.103 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:08.103 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:08.103 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:08.103 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:08.103 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:08.103 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:26:08.103 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:08.103 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:26:08.103 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:26:08.103 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:26:08.103 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:26:08.103 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:26:08.103 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:26:08.103 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:08.103 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:08.103 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:08.103 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:08.103 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:08.104 20:18:00 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:08.104 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:08.104 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:08.104 Found net devices under 0000:31:00.0: cvl_0_0 00:26:08.104 20:18:00 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:08.104 Found net devices under 0000:31:00.1: cvl_0_1 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:26:08.104 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:08.366 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:08.366 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:08.366 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:08.366 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:26:08.366 00:26:08.366 --- 10.0.0.2 ping statistics --- 00:26:08.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:08.366 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:26:08.366 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:08.366 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:08.366 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.370 ms 00:26:08.366 00:26:08.366 --- 10.0.0.1 ping statistics --- 00:26:08.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:08.366 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:26:08.366 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:08.366 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:26:08.366 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:08.366 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:08.366 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:08.366 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:08.366 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:08.366 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:08.366 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:08.366 20:18:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:26:08.366 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:08.366 20:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:08.366 20:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:08.366 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=131172 00:26:08.366 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 131172 00:26:08.366 20:18:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:08.366 20:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@827 -- # '[' -z 131172 ']' 00:26:08.366 20:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:08.366 20:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:08.366 20:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:08.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:08.366 20:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:08.366 20:18:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:08.366 [2024-05-15 20:18:00.750475] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:26:08.366 [2024-05-15 20:18:00.750542] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:08.366 EAL: No free 2048 kB hugepages reported on node 1 00:26:08.366 [2024-05-15 20:18:00.846693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:08.627 [2024-05-15 20:18:00.944106] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:08.627 [2024-05-15 20:18:00.944162] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:08.627 [2024-05-15 20:18:00.944170] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:08.627 [2024-05-15 20:18:00.944177] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:08.627 [2024-05-15 20:18:00.944184] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:08.627 [2024-05-15 20:18:00.944335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:08.627 [2024-05-15 20:18:00.944430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:08.627 [2024-05-15 20:18:00.944746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:08.627 [2024-05-15 20:18:00.944749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.200 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:09.200 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@860 -- # return 0 00:26:09.200 20:18:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:09.200 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:09.200 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.200 20:18:01 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:09.200 20:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:09.200 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.200 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.200 [2024-05-15 20:18:01.684063] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:09.200 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.200 20:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:26:09.200 20:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:09.200 20:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:09.200 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.200 20:18:01 
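The multiconnection test reuses the same namespace topology but starts the target on four cores (-m 0xF) and then blocks in waitforlisten until the RPC socket answers. A rough equivalent of that wait is sketched below, using rpc_get_methods as a hypothetical readiness probe; the real helper lives in autotest_common.sh and may differ in detail.

# Start the target on 4 cores for the multiconnection test, then wait for its RPC socket.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Hypothetical readiness loop standing in for waitforlisten(): poll the RPC socket
# with a cheap RPC until the target responds, then continue with configuration.
until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done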
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.461 Malloc1 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.461 [2024-05-15 20:18:01.751015] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:09.461 [2024-05-15 20:18:01.751244] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.461 Malloc2 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.461 Malloc3 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.461 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.462 20:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:26:09.462 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.462 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.462 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.462 20:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:09.462 20:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:26:09.462 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.462 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.462 Malloc4 00:26:09.462 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.462 20:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:26:09.462 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.462 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.462 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.462 20:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode4 Malloc4 00:26:09.462 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.462 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.462 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.462 20:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:26:09.462 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.462 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.462 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.462 20:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:09.462 20:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:26:09.462 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.462 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.462 Malloc5 00:26:09.462 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.462 20:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:26:09.462 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.462 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.462 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.462 20:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:26:09.462 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.462 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.462 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.462 20:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:26:09.462 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.462 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.724 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.724 20:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:09.724 20:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:26:09.724 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.724 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.724 Malloc6 00:26:09.724 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.724 20:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:26:09.724 
20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.724 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.724 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.724 20:18:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:26:09.724 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.724 20:18:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.724 Malloc7 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 
-- # xtrace_disable 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.724 Malloc8 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.724 Malloc9 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.724 20:18:02 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.724 Malloc10 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.724 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.987 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.987 20:18:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:26:09.987 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.987 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.987 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.987 20:18:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:09.987 20:18:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:26:09.987 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.987 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.987 Malloc11 00:26:09.987 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.987 20:18:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:26:09.987 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.987 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.987 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.987 20:18:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:26:09.987 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.987 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.987 20:18:02 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.987 20:18:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:26:09.987 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.987 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:09.987 20:18:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.987 20:18:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:26:09.987 20:18:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:09.987 20:18:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:11.416 20:18:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:26:11.416 20:18:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:26:11.416 20:18:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:26:11.416 20:18:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:26:11.416 20:18:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:26:13.961 20:18:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:26:13.961 20:18:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:26:13.961 20:18:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK1 00:26:13.961 20:18:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:26:13.961 20:18:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:26:13.962 20:18:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:26:13.962 20:18:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:13.962 20:18:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:26:14.903 20:18:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:26:14.903 20:18:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:26:14.903 20:18:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:26:14.903 20:18:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:26:14.903 20:18:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:26:17.447 20:18:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:26:17.447 20:18:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:26:17.447 20:18:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK2 00:26:17.447 20:18:09 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:26:17.447 20:18:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:26:17.447 20:18:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:26:17.447 20:18:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:17.448 20:18:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:26:18.839 20:18:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:26:18.839 20:18:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:26:18.839 20:18:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:26:18.839 20:18:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:26:18.839 20:18:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:26:20.752 20:18:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:26:20.752 20:18:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:26:20.752 20:18:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK3 00:26:20.752 20:18:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:26:20.752 20:18:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:26:20.752 20:18:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:26:20.752 20:18:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:20.752 20:18:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:26:22.665 20:18:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:26:22.665 20:18:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:26:22.665 20:18:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:26:22.665 20:18:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:26:22.665 20:18:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:26:24.579 20:18:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:26:24.579 20:18:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:26:24.579 20:18:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK4 00:26:24.579 20:18:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:26:24.579 20:18:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:26:24.579 20:18:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 
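The loop traced here (multiconnection.sh@28-30) repeats the same two steps for cnode1 through cnode11: connect the host to one subsystem over NVMe/TCP, then poll until a block device whose serial matches the SPDKn string given to nvmf_create_subsystem shows up. A condensed sketch of that pattern is below; the host NQN/UUID and target address are copied from the log, and the 15-attempt / 2-second cadence mirrors waitforserial as it appears in the trace.

# Connect to each subsystem and wait for its namespace to surface as a block device.
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
for i in $(seq 1 11); do
    sudo nvme connect --hostnqn="$HOSTNQN" --hostid="${HOSTNQN#*uuid:}" \
        -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
    for try in $(seq 1 15); do                           # waitforserial: poll for serial SPDK$i
        lsblk -l -o NAME,SERIAL | grep -q "SPDK$i" && break
        sleep 2
    done
done

Keying on the serial reported by lsblk is what ties each freshly attached /dev/nvmeXnY back to a specific subsystem without relying on controller enumeration order.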
00:26:24.579 20:18:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:24.579 20:18:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:26:25.966 20:18:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:25.966 20:18:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:26:25.966 20:18:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:26:25.966 20:18:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:26:25.966 20:18:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:26:28.511 20:18:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:26:28.511 20:18:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:26:28.511 20:18:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK5 00:26:28.511 20:18:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:26:28.511 20:18:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:26:28.511 20:18:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:26:28.511 20:18:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:28.511 20:18:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:26:29.895 20:18:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:29.895 20:18:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:26:29.895 20:18:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:26:29.895 20:18:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:26:29.895 20:18:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:26:31.808 20:18:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:26:31.808 20:18:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:26:31.808 20:18:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK6 00:26:31.808 20:18:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:26:31.808 20:18:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:26:31.808 20:18:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:26:31.808 20:18:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:31.808 20:18:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:33.817 20:18:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:33.817 20:18:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:26:33.817 20:18:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:26:33.817 20:18:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:26:33.817 20:18:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:26:35.734 20:18:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:26:35.734 20:18:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:26:35.734 20:18:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK7 00:26:35.734 20:18:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:26:35.734 20:18:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:26:35.734 20:18:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:26:35.734 20:18:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:35.735 20:18:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:26:37.648 20:18:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:37.648 20:18:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:26:37.648 20:18:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:26:37.648 20:18:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:26:37.648 20:18:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:26:39.564 20:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:26:39.564 20:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:26:39.564 20:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK8 00:26:39.564 20:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:26:39.564 20:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:26:39.564 20:18:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:26:39.564 20:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:39.564 20:18:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:41.482 20:18:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:41.482 20:18:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 
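For reference, the rpc_cmd provisioning loop traced earlier (multiconnection.sh@21-25: bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns and nvmf_subsystem_add_listener, preceded by nvmf_create_transport) maps one-to-one onto scripts/rpc.py calls against the default /var/tmp/spdk.sock the target is listening on. A sketch of the equivalent stand-alone sequence, with NVMF_SUBSYS=11 and the 64 MiB / 512-byte malloc geometry taken from the trace:

# Provision 11 subsystems, each backed by one malloc bdev and listening on 10.0.0.2:4420.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 1 11); do
    ./scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
    ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done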
00:26:41.482 20:18:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:26:41.482 20:18:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:26:41.482 20:18:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:26:43.399 20:18:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:26:43.399 20:18:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:26:43.399 20:18:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK9 00:26:43.399 20:18:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:26:43.399 20:18:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:26:43.399 20:18:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:26:43.399 20:18:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.399 20:18:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:45.313 20:18:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:45.313 20:18:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:26:45.313 20:18:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:26:45.313 20:18:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:26:45.313 20:18:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:26:47.228 20:18:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:26:47.228 20:18:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:26:47.228 20:18:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK10 00:26:47.228 20:18:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:26:47.228 20:18:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:26:47.228 20:18:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:26:47.228 20:18:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:47.228 20:18:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:49.140 20:18:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:49.140 20:18:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:26:49.140 20:18:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:26:49.140 20:18:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:26:49.140 20:18:41 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1201 -- # sleep 2 00:26:51.054 20:18:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:26:51.054 20:18:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:26:51.054 20:18:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK11 00:26:51.054 20:18:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:26:51.054 20:18:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:26:51.054 20:18:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:26:51.054 20:18:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:51.054 [global] 00:26:51.054 thread=1 00:26:51.054 invalidate=1 00:26:51.054 rw=read 00:26:51.054 time_based=1 00:26:51.054 runtime=10 00:26:51.054 ioengine=libaio 00:26:51.054 direct=1 00:26:51.054 bs=262144 00:26:51.054 iodepth=64 00:26:51.054 norandommap=1 00:26:51.054 numjobs=1 00:26:51.054 00:26:51.054 [job0] 00:26:51.054 filename=/dev/nvme0n1 00:26:51.054 [job1] 00:26:51.054 filename=/dev/nvme10n1 00:26:51.054 [job2] 00:26:51.054 filename=/dev/nvme1n1 00:26:51.054 [job3] 00:26:51.054 filename=/dev/nvme2n1 00:26:51.054 [job4] 00:26:51.054 filename=/dev/nvme3n1 00:26:51.054 [job5] 00:26:51.054 filename=/dev/nvme4n1 00:26:51.054 [job6] 00:26:51.054 filename=/dev/nvme5n1 00:26:51.054 [job7] 00:26:51.054 filename=/dev/nvme6n1 00:26:51.054 [job8] 00:26:51.054 filename=/dev/nvme7n1 00:26:51.054 [job9] 00:26:51.054 filename=/dev/nvme8n1 00:26:51.054 [job10] 00:26:51.054 filename=/dev/nvme9n1 00:26:51.313 Could not set queue depth (nvme0n1) 00:26:51.313 Could not set queue depth (nvme10n1) 00:26:51.313 Could not set queue depth (nvme1n1) 00:26:51.313 Could not set queue depth (nvme2n1) 00:26:51.313 Could not set queue depth (nvme3n1) 00:26:51.313 Could not set queue depth (nvme4n1) 00:26:51.313 Could not set queue depth (nvme5n1) 00:26:51.313 Could not set queue depth (nvme6n1) 00:26:51.313 Could not set queue depth (nvme7n1) 00:26:51.313 Could not set queue depth (nvme8n1) 00:26:51.313 Could not set queue depth (nvme9n1) 00:26:51.572 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:51.572 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:51.572 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:51.572 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:51.572 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:51.572 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:51.572 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:51.572 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:51.572 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:51.572 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:51.572 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:51.572 fio-3.35 00:26:51.572 Starting 11 threads 00:27:03.809 00:27:03.809 job0: (groupid=0, jobs=1): err= 0: pid=140448: Wed May 15 20:18:54 2024 00:27:03.809 read: IOPS=886, BW=222MiB/s (232MB/s)(2225MiB/10046msec) 00:27:03.809 slat (usec): min=8, max=95829, avg=1044.72, stdev=3013.68 00:27:03.809 clat (msec): min=4, max=226, avg=71.10, stdev=26.78 00:27:03.809 lat (msec): min=4, max=231, avg=72.15, stdev=27.22 00:27:03.809 clat percentiles (msec): 00:27:03.809 | 1.00th=[ 18], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 49], 00:27:03.809 | 30.00th=[ 55], 40.00th=[ 61], 50.00th=[ 69], 60.00th=[ 80], 00:27:03.809 | 70.00th=[ 85], 80.00th=[ 92], 90.00th=[ 108], 95.00th=[ 116], 00:27:03.809 | 99.00th=[ 138], 99.50th=[ 142], 99.90th=[ 150], 99.95th=[ 155], 00:27:03.809 | 99.99th=[ 226] 00:27:03.809 bw ( KiB/s): min=154112, max=400384, per=9.61%, avg=226252.80, stdev=65436.96, samples=20 00:27:03.809 iops : min= 602, max= 1564, avg=883.80, stdev=255.61, samples=20 00:27:03.809 lat (msec) : 10=0.08%, 20=1.27%, 50=21.21%, 100=61.68%, 250=15.76% 00:27:03.809 cpu : usr=0.33%, sys=2.94%, ctx=2028, majf=0, minf=4097 00:27:03.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:27:03.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:03.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:03.809 issued rwts: total=8901,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:03.809 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:03.809 job1: (groupid=0, jobs=1): err= 0: pid=140449: Wed May 15 20:18:54 2024 00:27:03.809 read: IOPS=841, BW=210MiB/s (221MB/s)(2111MiB/10034msec) 00:27:03.809 slat (usec): min=8, max=109782, avg=1003.39, stdev=3392.29 00:27:03.809 clat (msec): min=2, max=196, avg=74.95, stdev=30.86 00:27:03.809 lat (msec): min=2, max=229, avg=75.95, stdev=31.22 00:27:03.809 clat percentiles (msec): 00:27:03.809 | 1.00th=[ 15], 5.00th=[ 27], 10.00th=[ 42], 20.00th=[ 54], 00:27:03.809 | 30.00th=[ 59], 40.00th=[ 63], 50.00th=[ 69], 60.00th=[ 75], 00:27:03.809 | 70.00th=[ 89], 80.00th=[ 102], 90.00th=[ 116], 95.00th=[ 136], 00:27:03.809 | 99.00th=[ 161], 99.50th=[ 171], 99.90th=[ 180], 99.95th=[ 188], 00:27:03.809 | 99.99th=[ 197] 00:27:03.809 bw ( KiB/s): min=123904, max=279040, per=9.11%, avg=214529.85, stdev=49979.33, samples=20 00:27:03.809 iops : min= 484, max= 1090, avg=838.00, stdev=195.23, samples=20 00:27:03.809 lat (msec) : 4=0.19%, 10=0.45%, 20=1.43%, 50=12.81%, 100=63.93% 00:27:03.809 lat (msec) : 250=21.19% 00:27:03.809 cpu : usr=0.40%, sys=2.92%, ctx=2027, majf=0, minf=4097 00:27:03.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:27:03.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:03.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:03.809 issued rwts: total=8444,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:03.809 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:03.809 job2: (groupid=0, jobs=1): err= 0: pid=140450: Wed May 15 20:18:54 2024 00:27:03.809 read: IOPS=741, BW=185MiB/s (194MB/s)(1870MiB/10082msec) 00:27:03.809 slat (usec): min=9, max=53817, avg=1292.99, stdev=3463.04 00:27:03.809 clat (msec): min=8, max=187, avg=84.87, stdev=23.26 00:27:03.809 lat (msec): min=8, max=187, avg=86.16, stdev=23.68 
00:27:03.809 clat percentiles (msec): 00:27:03.809 | 1.00th=[ 21], 5.00th=[ 46], 10.00th=[ 55], 20.00th=[ 69], 00:27:03.809 | 30.00th=[ 75], 40.00th=[ 80], 50.00th=[ 85], 60.00th=[ 91], 00:27:03.809 | 70.00th=[ 97], 80.00th=[ 105], 90.00th=[ 114], 95.00th=[ 121], 00:27:03.809 | 99.00th=[ 138], 99.50th=[ 146], 99.90th=[ 176], 99.95th=[ 188], 00:27:03.809 | 99.99th=[ 188] 00:27:03.809 bw ( KiB/s): min=135168, max=282624, per=8.06%, avg=189807.50, stdev=41064.79, samples=20 00:27:03.809 iops : min= 528, max= 1104, avg=741.40, stdev=160.43, samples=20 00:27:03.809 lat (msec) : 10=0.11%, 20=0.87%, 50=6.41%, 100=68.11%, 250=24.51% 00:27:03.809 cpu : usr=0.28%, sys=2.74%, ctx=1708, majf=0, minf=4097 00:27:03.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:27:03.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:03.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:03.809 issued rwts: total=7478,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:03.809 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:03.809 job3: (groupid=0, jobs=1): err= 0: pid=140451: Wed May 15 20:18:54 2024 00:27:03.809 read: IOPS=766, BW=192MiB/s (201MB/s)(1932MiB/10085msec) 00:27:03.809 slat (usec): min=8, max=91828, avg=1100.54, stdev=3434.52 00:27:03.809 clat (msec): min=4, max=199, avg=82.30, stdev=28.42 00:27:03.809 lat (msec): min=4, max=199, avg=83.40, stdev=28.84 00:27:03.809 clat percentiles (msec): 00:27:03.809 | 1.00th=[ 11], 5.00th=[ 29], 10.00th=[ 42], 20.00th=[ 57], 00:27:03.809 | 30.00th=[ 73], 40.00th=[ 80], 50.00th=[ 85], 60.00th=[ 91], 00:27:03.809 | 70.00th=[ 100], 80.00th=[ 106], 90.00th=[ 115], 95.00th=[ 123], 00:27:03.809 | 99.00th=[ 140], 99.50th=[ 159], 99.90th=[ 188], 99.95th=[ 194], 00:27:03.809 | 99.99th=[ 201] 00:27:03.809 bw ( KiB/s): min=143360, max=310272, per=8.33%, avg=196249.60, stdev=42447.47, samples=20 00:27:03.809 iops : min= 560, max= 1212, avg=766.60, stdev=165.81, samples=20 00:27:03.809 lat (msec) : 10=0.96%, 20=1.60%, 50=11.30%, 100=57.86%, 250=28.28% 00:27:03.809 cpu : usr=0.31%, sys=2.29%, ctx=1847, majf=0, minf=4097 00:27:03.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:27:03.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:03.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:03.809 issued rwts: total=7729,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:03.809 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:03.809 job4: (groupid=0, jobs=1): err= 0: pid=140452: Wed May 15 20:18:54 2024 00:27:03.809 read: IOPS=774, BW=194MiB/s (203MB/s)(1945MiB/10046msec) 00:27:03.809 slat (usec): min=7, max=47013, avg=1164.66, stdev=3130.27 00:27:03.809 clat (msec): min=7, max=146, avg=81.38, stdev=23.19 00:27:03.809 lat (msec): min=7, max=156, avg=82.54, stdev=23.59 00:27:03.809 clat percentiles (msec): 00:27:03.809 | 1.00th=[ 29], 5.00th=[ 48], 10.00th=[ 54], 20.00th=[ 61], 00:27:03.809 | 30.00th=[ 67], 40.00th=[ 75], 50.00th=[ 82], 60.00th=[ 87], 00:27:03.809 | 70.00th=[ 94], 80.00th=[ 104], 90.00th=[ 112], 95.00th=[ 121], 00:27:03.809 | 99.00th=[ 136], 99.50th=[ 140], 99.90th=[ 144], 99.95th=[ 144], 00:27:03.809 | 99.99th=[ 146] 00:27:03.809 bw ( KiB/s): min=134144, max=286720, per=8.39%, avg=197580.80, stdev=45857.57, samples=20 00:27:03.809 iops : min= 524, max= 1120, avg=771.80, stdev=179.13, samples=20 00:27:03.809 lat (msec) : 10=0.06%, 20=0.30%, 50=6.12%, 100=69.71%, 250=23.81% 
00:27:03.809 cpu : usr=0.26%, sys=2.33%, ctx=1835, majf=0, minf=4097 00:27:03.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:27:03.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:03.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:03.809 issued rwts: total=7781,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:03.809 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:03.809 job5: (groupid=0, jobs=1): err= 0: pid=140453: Wed May 15 20:18:54 2024 00:27:03.809 read: IOPS=826, BW=207MiB/s (217MB/s)(2084MiB/10087msec) 00:27:03.809 slat (usec): min=8, max=65554, avg=998.11, stdev=3025.72 00:27:03.809 clat (msec): min=3, max=200, avg=76.35, stdev=32.70 00:27:03.809 lat (msec): min=3, max=200, avg=77.35, stdev=33.23 00:27:03.809 clat percentiles (msec): 00:27:03.809 | 1.00th=[ 11], 5.00th=[ 26], 10.00th=[ 36], 20.00th=[ 45], 00:27:03.809 | 30.00th=[ 51], 40.00th=[ 66], 50.00th=[ 81], 60.00th=[ 90], 00:27:03.809 | 70.00th=[ 99], 80.00th=[ 106], 90.00th=[ 115], 95.00th=[ 128], 00:27:03.809 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 197], 99.95th=[ 199], 00:27:03.809 | 99.99th=[ 201] 00:27:03.809 bw ( KiB/s): min=136192, max=348160, per=8.99%, avg=211721.15, stdev=66517.94, samples=20 00:27:03.809 iops : min= 532, max= 1360, avg=827.00, stdev=259.86, samples=20 00:27:03.809 lat (msec) : 4=0.01%, 10=0.85%, 20=2.23%, 50=25.95%, 100=43.30% 00:27:03.809 lat (msec) : 250=27.65% 00:27:03.809 cpu : usr=0.36%, sys=2.51%, ctx=2071, majf=0, minf=4097 00:27:03.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:27:03.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:03.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:03.809 issued rwts: total=8334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:03.809 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:03.809 job6: (groupid=0, jobs=1): err= 0: pid=140454: Wed May 15 20:18:54 2024 00:27:03.809 read: IOPS=838, BW=210MiB/s (220MB/s)(2102MiB/10033msec) 00:27:03.809 slat (usec): min=8, max=44330, avg=1002.81, stdev=2890.43 00:27:03.809 clat (msec): min=4, max=166, avg=75.25, stdev=27.96 00:27:03.809 lat (msec): min=4, max=179, avg=76.26, stdev=28.37 00:27:03.809 clat percentiles (msec): 00:27:03.809 | 1.00th=[ 27], 5.00th=[ 42], 10.00th=[ 46], 20.00th=[ 53], 00:27:03.809 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 66], 60.00th=[ 78], 00:27:03.809 | 70.00th=[ 90], 80.00th=[ 102], 90.00th=[ 115], 95.00th=[ 132], 00:27:03.809 | 99.00th=[ 148], 99.50th=[ 150], 99.90th=[ 163], 99.95th=[ 165], 00:27:03.809 | 99.99th=[ 167] 00:27:03.809 bw ( KiB/s): min=124928, max=313856, per=9.07%, avg=213603.40, stdev=62655.45, samples=20 00:27:03.809 iops : min= 488, max= 1226, avg=834.35, stdev=244.70, samples=20 00:27:03.809 lat (msec) : 10=0.02%, 20=0.36%, 50=15.07%, 100=64.06%, 250=20.49% 00:27:03.809 cpu : usr=0.40%, sys=2.43%, ctx=2093, majf=0, minf=4097 00:27:03.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:27:03.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:03.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:03.809 issued rwts: total=8408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:03.810 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:03.810 job7: (groupid=0, jobs=1): err= 0: pid=140455: Wed May 15 20:18:54 2024 00:27:03.810 read: IOPS=685, BW=171MiB/s 
(180MB/s)(1729MiB/10088msec) 00:27:03.810 slat (usec): min=8, max=92294, avg=1284.61, stdev=3519.64 00:27:03.810 clat (msec): min=4, max=202, avg=91.95, stdev=25.45 00:27:03.810 lat (msec): min=4, max=221, avg=93.24, stdev=25.82 00:27:03.810 clat percentiles (msec): 00:27:03.810 | 1.00th=[ 15], 5.00th=[ 42], 10.00th=[ 62], 20.00th=[ 77], 00:27:03.810 | 30.00th=[ 83], 40.00th=[ 89], 50.00th=[ 93], 60.00th=[ 99], 00:27:03.810 | 70.00th=[ 106], 80.00th=[ 113], 90.00th=[ 121], 95.00th=[ 127], 00:27:03.810 | 99.00th=[ 153], 99.50th=[ 155], 99.90th=[ 192], 99.95th=[ 192], 00:27:03.810 | 99.99th=[ 203] 00:27:03.810 bw ( KiB/s): min=134144, max=252416, per=7.45%, avg=175420.75, stdev=29390.30, samples=20 00:27:03.810 iops : min= 524, max= 986, avg=685.20, stdev=114.83, samples=20 00:27:03.810 lat (msec) : 10=0.40%, 20=1.34%, 50=4.48%, 100=56.69%, 250=37.08% 00:27:03.810 cpu : usr=0.20%, sys=2.20%, ctx=1660, majf=0, minf=3534 00:27:03.810 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:27:03.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:03.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:03.810 issued rwts: total=6917,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:03.810 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:03.810 job8: (groupid=0, jobs=1): err= 0: pid=140456: Wed May 15 20:18:54 2024 00:27:03.810 read: IOPS=836, BW=209MiB/s (219MB/s)(2111MiB/10087msec) 00:27:03.810 slat (usec): min=7, max=70928, avg=987.93, stdev=3207.66 00:27:03.810 clat (msec): min=3, max=203, avg=75.40, stdev=33.44 00:27:03.810 lat (msec): min=3, max=203, avg=76.39, stdev=33.90 00:27:03.810 clat percentiles (msec): 00:27:03.810 | 1.00th=[ 15], 5.00th=[ 30], 10.00th=[ 34], 20.00th=[ 41], 00:27:03.810 | 30.00th=[ 52], 40.00th=[ 61], 50.00th=[ 74], 60.00th=[ 88], 00:27:03.810 | 70.00th=[ 99], 80.00th=[ 109], 90.00th=[ 118], 95.00th=[ 129], 00:27:03.810 | 99.00th=[ 146], 99.50th=[ 159], 99.90th=[ 182], 99.95th=[ 194], 00:27:03.810 | 99.99th=[ 205] 00:27:03.810 bw ( KiB/s): min=132873, max=415232, per=9.11%, avg=214515.65, stdev=80947.97, samples=20 00:27:03.810 iops : min= 519, max= 1622, avg=837.95, stdev=316.20, samples=20 00:27:03.810 lat (msec) : 4=0.01%, 10=0.34%, 20=1.52%, 50=25.94%, 100=43.82% 00:27:03.810 lat (msec) : 250=28.37% 00:27:03.810 cpu : usr=0.34%, sys=2.54%, ctx=2107, majf=0, minf=4097 00:27:03.810 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:27:03.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:03.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:03.810 issued rwts: total=8442,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:03.810 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:03.810 job9: (groupid=0, jobs=1): err= 0: pid=140457: Wed May 15 20:18:54 2024 00:27:03.810 read: IOPS=650, BW=163MiB/s (171MB/s)(1641MiB/10088msec) 00:27:03.810 slat (usec): min=9, max=99728, avg=1514.43, stdev=3934.17 00:27:03.810 clat (msec): min=8, max=183, avg=96.80, stdev=22.13 00:27:03.810 lat (msec): min=8, max=204, avg=98.32, stdev=22.40 00:27:03.810 clat percentiles (msec): 00:27:03.810 | 1.00th=[ 25], 5.00th=[ 66], 10.00th=[ 77], 20.00th=[ 83], 00:27:03.810 | 30.00th=[ 88], 40.00th=[ 93], 50.00th=[ 97], 60.00th=[ 102], 00:27:03.810 | 70.00th=[ 107], 80.00th=[ 113], 90.00th=[ 122], 95.00th=[ 130], 00:27:03.810 | 99.00th=[ 153], 99.50th=[ 171], 99.90th=[ 176], 99.95th=[ 184], 00:27:03.810 | 99.99th=[ 
184] 00:27:03.810 bw ( KiB/s): min=127743, max=263168, per=7.07%, avg=166361.55, stdev=29881.83, samples=20 00:27:03.810 iops : min= 498, max= 1028, avg=649.80, stdev=116.79, samples=20 00:27:03.810 lat (msec) : 10=0.05%, 20=0.70%, 50=2.91%, 100=53.29%, 250=43.05% 00:27:03.810 cpu : usr=0.22%, sys=2.47%, ctx=1411, majf=0, minf=4097 00:27:03.810 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:27:03.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:03.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:03.810 issued rwts: total=6562,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:03.810 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:03.810 job10: (groupid=0, jobs=1): err= 0: pid=140458: Wed May 15 20:18:54 2024 00:27:03.810 read: IOPS=1377, BW=344MiB/s (361MB/s)(3449MiB/10011msec) 00:27:03.810 slat (usec): min=8, max=25709, avg=721.55, stdev=1799.82 00:27:03.810 clat (msec): min=9, max=124, avg=45.69, stdev=17.13 00:27:03.810 lat (msec): min=11, max=124, avg=46.41, stdev=17.35 00:27:03.810 clat percentiles (msec): 00:27:03.810 | 1.00th=[ 29], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 34], 00:27:03.810 | 30.00th=[ 34], 40.00th=[ 35], 50.00th=[ 37], 60.00th=[ 41], 00:27:03.810 | 70.00th=[ 53], 80.00th=[ 63], 90.00th=[ 72], 95.00th=[ 80], 00:27:03.810 | 99.00th=[ 97], 99.50th=[ 107], 99.90th=[ 117], 99.95th=[ 120], 00:27:03.810 | 99.99th=[ 125] 00:27:03.810 bw ( KiB/s): min=207872, max=476672, per=14.93%, avg=351513.60, stdev=95335.02, samples=20 00:27:03.810 iops : min= 812, max= 1862, avg=1373.10, stdev=372.40, samples=20 00:27:03.810 lat (msec) : 10=0.01%, 20=0.15%, 50=68.30%, 100=30.66%, 250=0.88% 00:27:03.810 cpu : usr=0.43%, sys=4.36%, ctx=2727, majf=0, minf=4097 00:27:03.810 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:27:03.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:03.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:03.810 issued rwts: total=13794,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:03.810 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:03.810 00:27:03.810 Run status group 0 (all jobs): 00:27:03.810 READ: bw=2300MiB/s (2411MB/s), 163MiB/s-344MiB/s (171MB/s-361MB/s), io=22.7GiB (24.3GB), run=10011-10088msec 00:27:03.810 00:27:03.810 Disk stats (read/write): 00:27:03.810 nvme0n1: ios=17364/0, merge=0/0, ticks=1218976/0, in_queue=1218976, util=96.43% 00:27:03.810 nvme10n1: ios=16485/0, merge=0/0, ticks=1221431/0, in_queue=1221431, util=96.69% 00:27:03.810 nvme1n1: ios=14667/0, merge=0/0, ticks=1215629/0, in_queue=1215629, util=97.07% 00:27:03.810 nvme2n1: ios=15166/0, merge=0/0, ticks=1219260/0, in_queue=1219260, util=97.27% 00:27:03.810 nvme3n1: ios=15148/0, merge=0/0, ticks=1219632/0, in_queue=1219632, util=97.37% 00:27:03.810 nvme4n1: ios=16393/0, merge=0/0, ticks=1220795/0, in_queue=1220795, util=97.86% 00:27:03.810 nvme5n1: ios=16405/0, merge=0/0, ticks=1220123/0, in_queue=1220123, util=98.07% 00:27:03.810 nvme6n1: ios=13545/0, merge=0/0, ticks=1219337/0, in_queue=1219337, util=98.28% 00:27:03.810 nvme7n1: ios=16591/0, merge=0/0, ticks=1221971/0, in_queue=1221971, util=98.74% 00:27:03.810 nvme8n1: ios=12818/0, merge=0/0, ticks=1216364/0, in_queue=1216364, util=98.98% 00:27:03.810 nvme9n1: ios=26902/0, merge=0/0, ticks=1223304/0, in_queue=1223304, util=99.16% 00:27:03.810 20:18:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:27:03.810 [global] 00:27:03.810 thread=1 00:27:03.810 invalidate=1 00:27:03.810 rw=randwrite 00:27:03.810 time_based=1 00:27:03.810 runtime=10 00:27:03.810 ioengine=libaio 00:27:03.810 direct=1 00:27:03.810 bs=262144 00:27:03.810 iodepth=64 00:27:03.810 norandommap=1 00:27:03.810 numjobs=1 00:27:03.810 00:27:03.810 [job0] 00:27:03.810 filename=/dev/nvme0n1 00:27:03.810 [job1] 00:27:03.810 filename=/dev/nvme10n1 00:27:03.810 [job2] 00:27:03.810 filename=/dev/nvme1n1 00:27:03.810 [job3] 00:27:03.810 filename=/dev/nvme2n1 00:27:03.810 [job4] 00:27:03.810 filename=/dev/nvme3n1 00:27:03.810 [job5] 00:27:03.810 filename=/dev/nvme4n1 00:27:03.810 [job6] 00:27:03.810 filename=/dev/nvme5n1 00:27:03.810 [job7] 00:27:03.810 filename=/dev/nvme6n1 00:27:03.810 [job8] 00:27:03.810 filename=/dev/nvme7n1 00:27:03.810 [job9] 00:27:03.810 filename=/dev/nvme8n1 00:27:03.810 [job10] 00:27:03.810 filename=/dev/nvme9n1 00:27:03.810 Could not set queue depth (nvme0n1) 00:27:03.810 Could not set queue depth (nvme10n1) 00:27:03.810 Could not set queue depth (nvme1n1) 00:27:03.810 Could not set queue depth (nvme2n1) 00:27:03.810 Could not set queue depth (nvme3n1) 00:27:03.810 Could not set queue depth (nvme4n1) 00:27:03.810 Could not set queue depth (nvme5n1) 00:27:03.810 Could not set queue depth (nvme6n1) 00:27:03.810 Could not set queue depth (nvme7n1) 00:27:03.810 Could not set queue depth (nvme8n1) 00:27:03.810 Could not set queue depth (nvme9n1) 00:27:03.810 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:03.810 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:03.810 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:03.810 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:03.810 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:03.810 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:03.810 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:03.810 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:03.810 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:03.810 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:03.810 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:03.810 fio-3.35 00:27:03.810 Starting 11 threads 00:27:13.815 00:27:13.815 job0: (groupid=0, jobs=1): err= 0: pid=142538: Wed May 15 20:19:06 2024 00:27:13.815 write: IOPS=676, BW=169MiB/s (177MB/s)(1704MiB/10074msec); 0 zone resets 00:27:13.815 slat (usec): min=15, max=46143, avg=1254.56, stdev=2784.37 00:27:13.815 clat (msec): min=2, max=211, avg=93.33, stdev=36.69 00:27:13.815 lat (msec): min=2, max=211, avg=94.58, stdev=37.23 00:27:13.815 clat percentiles (msec): 00:27:13.815 | 1.00th=[ 11], 5.00th=[ 28], 10.00th=[ 38], 20.00th=[ 65], 00:27:13.815 | 30.00th=[ 
77], 40.00th=[ 92], 50.00th=[ 103], 60.00th=[ 107], 00:27:13.815 | 70.00th=[ 108], 80.00th=[ 125], 90.00th=[ 136], 95.00th=[ 148], 00:27:13.815 | 99.00th=[ 176], 99.50th=[ 190], 99.90th=[ 203], 99.95th=[ 205], 00:27:13.815 | 99.99th=[ 211] 00:27:13.815 bw ( KiB/s): min=116736, max=288768, per=10.17%, avg=172869.70, stdev=46336.35, samples=20 00:27:13.815 iops : min= 456, max= 1128, avg=675.25, stdev=181.00, samples=20 00:27:13.815 lat (msec) : 4=0.10%, 10=0.70%, 20=1.98%, 50=12.81%, 100=30.07% 00:27:13.815 lat (msec) : 250=54.34% 00:27:13.815 cpu : usr=1.52%, sys=2.21%, ctx=2871, majf=0, minf=1 00:27:13.815 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:27:13.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.816 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:13.816 issued rwts: total=0,6815,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.816 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:13.816 job1: (groupid=0, jobs=1): err= 0: pid=142551: Wed May 15 20:19:06 2024 00:27:13.816 write: IOPS=525, BW=131MiB/s (138MB/s)(1331MiB/10127msec); 0 zone resets 00:27:13.816 slat (usec): min=20, max=36130, avg=1689.11, stdev=3280.24 00:27:13.816 clat (msec): min=17, max=266, avg=120.01, stdev=25.05 00:27:13.816 lat (msec): min=17, max=266, avg=121.70, stdev=25.31 00:27:13.816 clat percentiles (msec): 00:27:13.816 | 1.00th=[ 29], 5.00th=[ 82], 10.00th=[ 100], 20.00th=[ 107], 00:27:13.816 | 30.00th=[ 112], 40.00th=[ 115], 50.00th=[ 123], 60.00th=[ 129], 00:27:13.816 | 70.00th=[ 132], 80.00th=[ 136], 90.00th=[ 144], 95.00th=[ 155], 00:27:13.816 | 99.00th=[ 165], 99.50th=[ 207], 99.90th=[ 257], 99.95th=[ 259], 00:27:13.816 | 99.99th=[ 268] 00:27:13.816 bw ( KiB/s): min=114688, max=169472, per=7.93%, avg=134681.60, stdev=15888.49, samples=20 00:27:13.816 iops : min= 448, max= 662, avg=526.10, stdev=62.06, samples=20 00:27:13.816 lat (msec) : 20=0.17%, 50=2.37%, 100=8.94%, 250=88.41%, 500=0.11% 00:27:13.816 cpu : usr=1.18%, sys=1.58%, ctx=1888, majf=0, minf=1 00:27:13.816 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:27:13.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.816 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:13.816 issued rwts: total=0,5324,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.816 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:13.816 job2: (groupid=0, jobs=1): err= 0: pid=142557: Wed May 15 20:19:06 2024 00:27:13.816 write: IOPS=561, BW=140MiB/s (147MB/s)(1420MiB/10118msec); 0 zone resets 00:27:13.816 slat (usec): min=24, max=148314, avg=1660.38, stdev=4026.49 00:27:13.816 clat (msec): min=3, max=281, avg=112.15, stdev=34.09 00:27:13.816 lat (msec): min=3, max=281, avg=113.81, stdev=34.53 00:27:13.816 clat percentiles (msec): 00:27:13.816 | 1.00th=[ 28], 5.00th=[ 61], 10.00th=[ 67], 20.00th=[ 71], 00:27:13.816 | 30.00th=[ 101], 40.00th=[ 123], 50.00th=[ 127], 60.00th=[ 130], 00:27:13.816 | 70.00th=[ 131], 80.00th=[ 132], 90.00th=[ 138], 95.00th=[ 148], 00:27:13.816 | 99.00th=[ 201], 99.50th=[ 222], 99.90th=[ 271], 99.95th=[ 275], 00:27:13.816 | 99.99th=[ 284] 00:27:13.816 bw ( KiB/s): min=106496, max=239616, per=8.46%, avg=143820.80, stdev=41750.65, samples=20 00:27:13.816 iops : min= 416, max= 936, avg=561.80, stdev=163.09, samples=20 00:27:13.816 lat (msec) : 4=0.02%, 10=0.09%, 20=0.32%, 50=3.34%, 100=26.25% 00:27:13.816 lat (msec) : 250=69.76%, 500=0.23% 
00:27:13.816 cpu : usr=1.10%, sys=1.73%, ctx=1772, majf=0, minf=1 00:27:13.816 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:27:13.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.816 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:13.816 issued rwts: total=0,5681,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.816 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:13.816 job3: (groupid=0, jobs=1): err= 0: pid=142558: Wed May 15 20:19:06 2024 00:27:13.816 write: IOPS=569, BW=142MiB/s (149MB/s)(1443MiB/10131msec); 0 zone resets 00:27:13.816 slat (usec): min=21, max=30042, avg=1454.99, stdev=3011.64 00:27:13.816 clat (msec): min=4, max=262, avg=110.81, stdev=33.05 00:27:13.816 lat (msec): min=4, max=262, avg=112.27, stdev=33.54 00:27:13.816 clat percentiles (msec): 00:27:13.816 | 1.00th=[ 19], 5.00th=[ 41], 10.00th=[ 62], 20.00th=[ 101], 00:27:13.816 | 30.00th=[ 106], 40.00th=[ 107], 50.00th=[ 109], 60.00th=[ 116], 00:27:13.816 | 70.00th=[ 128], 80.00th=[ 136], 90.00th=[ 148], 95.00th=[ 161], 00:27:13.816 | 99.00th=[ 176], 99.50th=[ 203], 99.90th=[ 253], 99.95th=[ 253], 00:27:13.816 | 99.99th=[ 264] 00:27:13.816 bw ( KiB/s): min=100352, max=209408, per=8.60%, avg=146164.10, stdev=25354.83, samples=20 00:27:13.816 iops : min= 392, max= 818, avg=570.95, stdev=99.04, samples=20 00:27:13.816 lat (msec) : 10=0.23%, 20=1.04%, 50=5.54%, 100=14.19%, 250=78.90% 00:27:13.816 lat (msec) : 500=0.10% 00:27:13.816 cpu : usr=1.35%, sys=1.83%, ctx=2424, majf=0, minf=1 00:27:13.816 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:27:13.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.816 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:13.816 issued rwts: total=0,5773,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.816 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:13.816 job4: (groupid=0, jobs=1): err= 0: pid=142559: Wed May 15 20:19:06 2024 00:27:13.816 write: IOPS=859, BW=215MiB/s (225MB/s)(2163MiB/10063msec); 0 zone resets 00:27:13.816 slat (usec): min=20, max=24986, avg=979.73, stdev=1952.20 00:27:13.816 clat (msec): min=2, max=154, avg=73.44, stdev=20.28 00:27:13.816 lat (msec): min=3, max=156, avg=74.42, stdev=20.50 00:27:13.816 clat percentiles (msec): 00:27:13.816 | 1.00th=[ 16], 5.00th=[ 41], 10.00th=[ 59], 20.00th=[ 65], 00:27:13.816 | 30.00th=[ 67], 40.00th=[ 69], 50.00th=[ 70], 60.00th=[ 72], 00:27:13.816 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 103], 95.00th=[ 113], 00:27:13.816 | 99.00th=[ 133], 99.50th=[ 138], 99.90th=[ 148], 99.95th=[ 153], 00:27:13.816 | 99.99th=[ 155] 00:27:13.816 bw ( KiB/s): min=148480, max=259072, per=12.94%, avg=219903.10, stdev=31706.00, samples=20 00:27:13.816 iops : min= 580, max= 1012, avg=858.95, stdev=123.81, samples=20 00:27:13.816 lat (msec) : 4=0.02%, 10=0.28%, 20=1.29%, 50=5.94%, 100=81.38% 00:27:13.816 lat (msec) : 250=11.08% 00:27:13.816 cpu : usr=1.88%, sys=2.64%, ctx=3258, majf=0, minf=1 00:27:13.816 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:27:13.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.816 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:13.816 issued rwts: total=0,8652,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.816 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:13.816 job5: (groupid=0, jobs=1): err= 0: pid=142563: 
Wed May 15 20:19:06 2024 00:27:13.816 write: IOPS=741, BW=185MiB/s (194MB/s)(1875MiB/10120msec); 0 zone resets 00:27:13.816 slat (usec): min=24, max=13602, avg=1258.39, stdev=2436.31 00:27:13.816 clat (msec): min=2, max=241, avg=85.03, stdev=31.45 00:27:13.816 lat (msec): min=2, max=249, avg=86.28, stdev=31.90 00:27:13.816 clat percentiles (msec): 00:27:13.816 | 1.00th=[ 22], 5.00th=[ 50], 10.00th=[ 57], 20.00th=[ 63], 00:27:13.816 | 30.00th=[ 67], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 75], 00:27:13.816 | 70.00th=[ 102], 80.00th=[ 129], 90.00th=[ 131], 95.00th=[ 132], 00:27:13.816 | 99.00th=[ 136], 99.50th=[ 169], 99.90th=[ 234], 99.95th=[ 243], 00:27:13.816 | 99.99th=[ 243] 00:27:13.816 bw ( KiB/s): min=124416, max=297472, per=11.21%, avg=190412.80, stdev=60017.99, samples=20 00:27:13.816 iops : min= 486, max= 1162, avg=743.80, stdev=234.45, samples=20 00:27:13.816 lat (msec) : 4=0.08%, 10=0.25%, 20=0.63%, 50=4.25%, 100=64.63% 00:27:13.816 lat (msec) : 250=30.16% 00:27:13.816 cpu : usr=1.64%, sys=2.11%, ctx=2392, majf=0, minf=1 00:27:13.816 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:27:13.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.816 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:13.816 issued rwts: total=0,7501,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.816 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:13.816 job6: (groupid=0, jobs=1): err= 0: pid=142564: Wed May 15 20:19:06 2024 00:27:13.816 write: IOPS=572, BW=143MiB/s (150MB/s)(1450MiB/10129msec); 0 zone resets 00:27:13.816 slat (usec): min=24, max=54132, avg=1530.42, stdev=3288.14 00:27:13.816 clat (msec): min=10, max=264, avg=110.18, stdev=37.18 00:27:13.816 lat (msec): min=13, max=264, avg=111.71, stdev=37.69 00:27:13.816 clat percentiles (msec): 00:27:13.816 | 1.00th=[ 23], 5.00th=[ 41], 10.00th=[ 55], 20.00th=[ 77], 00:27:13.816 | 30.00th=[ 106], 40.00th=[ 112], 50.00th=[ 114], 60.00th=[ 121], 00:27:13.816 | 70.00th=[ 132], 80.00th=[ 140], 90.00th=[ 155], 95.00th=[ 161], 00:27:13.816 | 99.00th=[ 182], 99.50th=[ 205], 99.90th=[ 257], 99.95th=[ 257], 00:27:13.816 | 99.99th=[ 266] 00:27:13.816 bw ( KiB/s): min=102400, max=290816, per=8.64%, avg=146841.60, stdev=39815.11, samples=20 00:27:13.816 iops : min= 400, max= 1136, avg=573.60, stdev=155.53, samples=20 00:27:13.816 lat (msec) : 20=0.36%, 50=7.16%, 100=18.92%, 250=73.46%, 500=0.10% 00:27:13.816 cpu : usr=1.35%, sys=1.82%, ctx=2187, majf=0, minf=1 00:27:13.816 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:27:13.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.816 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:13.816 issued rwts: total=0,5799,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.816 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:13.816 job7: (groupid=0, jobs=1): err= 0: pid=142566: Wed May 15 20:19:06 2024 00:27:13.816 write: IOPS=576, BW=144MiB/s (151MB/s)(1453MiB/10075msec); 0 zone resets 00:27:13.816 slat (usec): min=26, max=25530, avg=1716.65, stdev=3006.99 00:27:13.816 clat (msec): min=14, max=163, avg=109.21, stdev=19.66 00:27:13.816 lat (msec): min=14, max=163, avg=110.93, stdev=19.75 00:27:13.816 clat percentiles (msec): 00:27:13.816 | 1.00th=[ 72], 5.00th=[ 77], 10.00th=[ 80], 20.00th=[ 100], 00:27:13.816 | 30.00th=[ 104], 40.00th=[ 107], 50.00th=[ 108], 60.00th=[ 110], 00:27:13.816 | 70.00th=[ 121], 80.00th=[ 130], 
90.00th=[ 133], 95.00th=[ 142], 00:27:13.816 | 99.00th=[ 150], 99.50th=[ 157], 99.90th=[ 163], 99.95th=[ 163], 00:27:13.816 | 99.99th=[ 163] 00:27:13.816 bw ( KiB/s): min=114688, max=207360, per=8.66%, avg=147148.80, stdev=21587.50, samples=20 00:27:13.816 iops : min= 448, max= 810, avg=574.80, stdev=84.33, samples=20 00:27:13.816 lat (msec) : 20=0.07%, 50=0.41%, 100=22.68%, 250=76.84% 00:27:13.816 cpu : usr=1.48%, sys=1.78%, ctx=1502, majf=0, minf=1 00:27:13.816 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:27:13.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.816 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:13.816 issued rwts: total=0,5811,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.816 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:13.816 job8: (groupid=0, jobs=1): err= 0: pid=142567: Wed May 15 20:19:06 2024 00:27:13.816 write: IOPS=490, BW=123MiB/s (129MB/s)(1242MiB/10129msec); 0 zone resets 00:27:13.816 slat (usec): min=26, max=47292, avg=1986.42, stdev=3617.06 00:27:13.816 clat (msec): min=12, max=264, avg=128.45, stdev=24.58 00:27:13.816 lat (msec): min=12, max=264, avg=130.43, stdev=24.70 00:27:13.816 clat percentiles (msec): 00:27:13.817 | 1.00th=[ 28], 5.00th=[ 105], 10.00th=[ 108], 20.00th=[ 113], 00:27:13.817 | 30.00th=[ 115], 40.00th=[ 121], 50.00th=[ 129], 60.00th=[ 133], 00:27:13.817 | 70.00th=[ 140], 80.00th=[ 146], 90.00th=[ 157], 95.00th=[ 161], 00:27:13.817 | 99.00th=[ 194], 99.50th=[ 213], 99.90th=[ 257], 99.95th=[ 257], 00:27:13.817 | 99.99th=[ 266] 00:27:13.817 bw ( KiB/s): min=100352, max=155648, per=7.39%, avg=125568.00, stdev=15337.41, samples=20 00:27:13.817 iops : min= 392, max= 608, avg=490.50, stdev=59.91, samples=20 00:27:13.817 lat (msec) : 20=0.30%, 50=1.55%, 100=1.77%, 250=96.26%, 500=0.12% 00:27:13.817 cpu : usr=1.11%, sys=1.35%, ctx=1372, majf=0, minf=1 00:27:13.817 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:27:13.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.817 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:13.817 issued rwts: total=0,4968,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.817 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:13.817 job9: (groupid=0, jobs=1): err= 0: pid=142568: Wed May 15 20:19:06 2024 00:27:13.817 write: IOPS=576, BW=144MiB/s (151MB/s)(1458MiB/10121msec); 0 zone resets 00:27:13.817 slat (usec): min=24, max=13391, avg=1524.13, stdev=3020.21 00:27:13.817 clat (msec): min=11, max=245, avg=109.55, stdev=34.01 00:27:13.817 lat (msec): min=11, max=245, avg=111.07, stdev=34.50 00:27:13.817 clat percentiles (msec): 00:27:13.817 | 1.00th=[ 23], 5.00th=[ 44], 10.00th=[ 61], 20.00th=[ 75], 00:27:13.817 | 30.00th=[ 94], 40.00th=[ 115], 50.00th=[ 124], 60.00th=[ 129], 00:27:13.817 | 70.00th=[ 131], 80.00th=[ 132], 90.00th=[ 144], 95.00th=[ 155], 00:27:13.817 | 99.00th=[ 161], 99.50th=[ 188], 99.90th=[ 239], 99.95th=[ 239], 00:27:13.817 | 99.99th=[ 247] 00:27:13.817 bw ( KiB/s): min=106496, max=224768, per=8.69%, avg=147635.20, stdev=39429.52, samples=20 00:27:13.817 iops : min= 416, max= 878, avg=576.70, stdev=154.02, samples=20 00:27:13.817 lat (msec) : 20=0.77%, 50=6.69%, 100=25.44%, 250=67.10% 00:27:13.817 cpu : usr=1.29%, sys=1.89%, ctx=2188, majf=0, minf=1 00:27:13.817 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:27:13.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.817 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:13.817 issued rwts: total=0,5830,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.817 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:13.817 job10: (groupid=0, jobs=1): err= 0: pid=142571: Wed May 15 20:19:06 2024 00:27:13.817 write: IOPS=502, BW=126MiB/s (132MB/s)(1272MiB/10120msec); 0 zone resets 00:27:13.817 slat (usec): min=25, max=28519, avg=1868.46, stdev=3483.27 00:27:13.817 clat (msec): min=3, max=244, avg=125.43, stdev=29.94 00:27:13.817 lat (msec): min=3, max=244, avg=127.30, stdev=30.32 00:27:13.817 clat percentiles (msec): 00:27:13.817 | 1.00th=[ 17], 5.00th=[ 58], 10.00th=[ 86], 20.00th=[ 122], 00:27:13.817 | 30.00th=[ 126], 40.00th=[ 130], 50.00th=[ 131], 60.00th=[ 132], 00:27:13.817 | 70.00th=[ 134], 80.00th=[ 144], 90.00th=[ 155], 95.00th=[ 159], 00:27:13.817 | 99.00th=[ 180], 99.50th=[ 197], 99.90th=[ 236], 99.95th=[ 236], 00:27:13.817 | 99.99th=[ 245] 00:27:13.817 bw ( KiB/s): min=102400, max=203264, per=7.57%, avg=128604.45, stdev=23245.97, samples=20 00:27:13.817 iops : min= 400, max= 794, avg=502.35, stdev=90.79, samples=20 00:27:13.817 lat (msec) : 4=0.02%, 10=0.28%, 20=1.20%, 50=2.87%, 100=9.20% 00:27:13.817 lat (msec) : 250=86.43% 00:27:13.817 cpu : usr=1.02%, sys=1.64%, ctx=1643, majf=0, minf=1 00:27:13.817 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:27:13.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.817 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:13.817 issued rwts: total=0,5086,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.817 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:13.817 00:27:13.817 Run status group 0 (all jobs): 00:27:13.817 WRITE: bw=1659MiB/s (1740MB/s), 123MiB/s-215MiB/s (129MB/s-225MB/s), io=16.4GiB (17.6GB), run=10063-10131msec 00:27:13.817 00:27:13.817 Disk stats (read/write): 00:27:13.817 nvme0n1: ios=49/13464, merge=0/0, ticks=163/1222313, in_queue=1222476, util=93.95% 00:27:13.817 nvme10n1: ios=29/10504, merge=0/0, ticks=72/1216229, in_queue=1216301, util=93.82% 00:27:13.817 nvme1n1: ios=40/11215, merge=0/0, ticks=2389/1191218, in_queue=1193607, util=99.98% 00:27:13.817 nvme2n1: ios=0/11396, merge=0/0, ticks=0/1219363, in_queue=1219363, util=94.78% 00:27:13.817 nvme3n1: ios=0/17134, merge=0/0, ticks=0/1223563, in_queue=1223563, util=95.02% 00:27:13.817 nvme4n1: ios=37/14853, merge=0/0, ticks=963/1214743, in_queue=1215706, util=100.00% 00:27:13.817 nvme5n1: ios=35/11452, merge=0/0, ticks=1345/1215773, in_queue=1217118, util=99.97% 00:27:13.817 nvme6n1: ios=0/11456, merge=0/0, ticks=0/1214600, in_queue=1214600, util=96.87% 00:27:13.817 nvme7n1: ios=0/9791, merge=0/0, ticks=0/1212320, in_queue=1212320, util=98.06% 00:27:13.817 nvme8n1: ios=0/11512, merge=0/0, ticks=0/1218166, in_queue=1218166, util=98.65% 00:27:13.817 nvme9n1: ios=0/10021, merge=0/0, ticks=0/1214732, in_queue=1214732, util=99.08% 00:27:13.817 20:19:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:27:13.817 20:19:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:27:13.817 20:19:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:13.817 20:19:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:14.078 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:27:14.078 20:19:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:27:14.078 20:19:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:27:14.078 20:19:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:14.078 20:19:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:27:14.078 20:19:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:14.078 20:19:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK1 00:27:14.078 20:19:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:27:14.078 20:19:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:14.078 20:19:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.078 20:19:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:14.078 20:19:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.078 20:19:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:14.078 20:19:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:27:14.651 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:27:14.651 20:19:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:27:14.651 20:19:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:27:14.651 20:19:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:14.651 20:19:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:27:14.651 20:19:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:14.651 20:19:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK2 00:27:14.651 20:19:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:27:14.651 20:19:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:14.651 20:19:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.651 20:19:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:14.651 20:19:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.651 20:19:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:14.651 20:19:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:27:14.912 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:27:14.912 20:19:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:27:14.912 20:19:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:27:14.912 20:19:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:14.912 20:19:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:27:14.912 
20:19:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK3 00:27:14.912 20:19:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:14.912 20:19:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:27:14.912 20:19:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:14.912 20:19:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.912 20:19:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:14.912 20:19:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.912 20:19:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:14.912 20:19:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:27:15.173 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:27:15.173 20:19:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:27:15.173 20:19:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:27:15.173 20:19:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:15.173 20:19:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:27:15.173 20:19:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:15.173 20:19:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK4 00:27:15.173 20:19:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:27:15.173 20:19:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:27:15.173 20:19:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.173 20:19:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:15.173 20:19:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.173 20:19:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:15.173 20:19:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:27:15.173 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:27:15.173 20:19:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:27:15.173 20:19:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:27:15.173 20:19:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:15.173 20:19:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:27:15.434 20:19:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:15.434 20:19:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK5 00:27:15.434 20:19:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:27:15.434 20:19:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:27:15.434 
20:19:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.434 20:19:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:15.434 20:19:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.434 20:19:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:15.434 20:19:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:27:15.695 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:27:15.695 20:19:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:27:15.695 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:27:15.695 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:15.695 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:27:15.695 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK6 00:27:15.695 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:15.695 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:27:15.695 20:19:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:27:15.695 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.695 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:15.695 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.695 20:19:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:15.695 20:19:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:27:15.695 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:27:15.695 20:19:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:27:15.695 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:27:15.695 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:15.695 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:27:15.695 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:15.695 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK7 00:27:15.956 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:27:15.956 20:19:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:27:15.956 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.956 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:15.956 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.956 20:19:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:15.956 20:19:08 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:27:15.956 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:27:15.956 20:19:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:27:15.956 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:27:15.956 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:15.956 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:27:15.956 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:15.956 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK8 00:27:15.956 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:27:15.956 20:19:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:27:15.956 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.956 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:15.956 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.956 20:19:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:15.956 20:19:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:27:16.217 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:27:16.217 20:19:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:27:16.217 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:27:16.217 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:16.217 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:27:16.217 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:16.217 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK9 00:27:16.217 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:27:16.217 20:19:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:27:16.217 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.217 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:16.217 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.217 20:19:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:16.217 20:19:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:27:16.478 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:27:16.478 20:19:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:27:16.478 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:27:16.478 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # 
lsblk -o NAME,SERIAL 00:27:16.478 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:27:16.478 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:16.478 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK10 00:27:16.478 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:27:16.478 20:19:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:27:16.478 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.478 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:16.478 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.478 20:19:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:16.478 20:19:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:27:16.478 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:27:16.478 20:19:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:27:16.478 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:27:16.478 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:16.478 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:27:16.478 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:16.478 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK11 00:27:16.478 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:27:16.478 20:19:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:27:16.478 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.478 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:16.478 20:19:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.478 20:19:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:27:16.478 20:19:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:27:16.478 20:19:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:27:16.478 20:19:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:16.478 20:19:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:27:16.478 20:19:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:16.478 20:19:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:27:16.478 20:19:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:16.478 20:19:08 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:16.478 rmmod nvme_tcp 00:27:16.739 rmmod nvme_fabrics 00:27:16.739 rmmod nvme_keyring 00:27:16.739 20:19:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
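The xtrace above is target/multiconnection.sh tearing the test down: for each of the 11 subsystems it disconnects the initiator-side controller, polls lsblk until the SPDKn serial disappears, then removes the subsystem over the SPDK RPC socket before nvmftestfini unloads the nvme-tcp modules. A minimal shell sketch of that loop, reconstructed from the trace (helper names and NQNs are taken from the log; the retry bound in the wait loop is an assumption, and the real waitforserial_disconnect helper may differ in detail):

    for i in $(seq 1 "$NVMF_SUBSYS"); do
        # Drop the initiator-side controller for subsystem i
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"

        # Wait until no block device with serial SPDK$i is visible any more
        tries=0
        while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
            sleep 1
            tries=$((tries + 1))
            [ "$tries" -ge 20 ] && break   # assumed upper bound, not shown in the log
        done

        # Delete the subsystem on the target via the SPDK RPC wrapper
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    done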
00:27:16.739 20:19:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:27:16.739 20:19:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:27:16.739 20:19:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 131172 ']' 00:27:16.739 20:19:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 131172 00:27:16.739 20:19:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@946 -- # '[' -z 131172 ']' 00:27:16.739 20:19:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@950 -- # kill -0 131172 00:27:16.739 20:19:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # uname 00:27:16.739 20:19:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:16.739 20:19:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 131172 00:27:16.739 20:19:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:16.739 20:19:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:16.739 20:19:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@964 -- # echo 'killing process with pid 131172' 00:27:16.739 killing process with pid 131172 00:27:16.739 20:19:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@965 -- # kill 131172 00:27:16.739 [2024-05-15 20:19:09.099104] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:16.739 20:19:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@970 -- # wait 131172 00:27:17.000 20:19:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:17.000 20:19:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:17.000 20:19:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:17.000 20:19:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:17.000 20:19:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:17.000 20:19:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.000 20:19:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:17.000 20:19:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.548 20:19:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:19.548 00:27:19.548 real 1m19.289s 00:27:19.548 user 4m56.451s 00:27:19.548 sys 0m23.518s 00:27:19.548 20:19:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:19.548 20:19:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:19.548 ************************************ 00:27:19.548 END TEST nvmf_multiconnection 00:27:19.548 ************************************ 00:27:19.548 20:19:11 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:19.548 20:19:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:19.548 20:19:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:19.548 20:19:11 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:27:19.548 ************************************ 00:27:19.548 START TEST nvmf_initiator_timeout 00:27:19.548 ************************************ 00:27:19.548 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:19.548 * Looking for test storage... 00:27:19.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:19.548 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:19.548 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:27:19.548 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:19.548 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:19.548 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:19.548 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:19.548 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:19.548 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:19.548 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:19.548 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:19.548 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:19.548 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:19.548 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:19.548 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:19.548 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:19.548 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:19.548 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:19.548 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:19.548 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:19.548 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:19.548 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:19.548 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:19.548 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.548 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.548 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.548 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:27:19.548 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.548 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:27:19.548 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:19.548 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:19.548 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:19.548 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:19.548 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:19.549 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:19.549 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:19.549 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:19.549 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:19.549 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:27:19.549 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:27:19.549 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:19.549 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:19.549 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:19.549 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:19.549 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:19.549 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:19.549 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:19.549 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.549 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:19.549 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:19.549 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:27:19.549 20:19:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 
-- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:27.789 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:27.789 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:27.789 20:19:19 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:27.789 Found net devices under 0000:31:00.0: cvl_0_0 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:27.789 Found net devices under 0000:31:00.1: cvl_0_1 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:27.789 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:27.790 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:27.790 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:27.790 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:27.790 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:27.790 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:27.790 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:27.790 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:27.790 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:27.790 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:27.790 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:27.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:27.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.598 ms 00:27:27.790 00:27:27.790 --- 10.0.0.2 ping statistics --- 00:27:27.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:27.790 rtt min/avg/max/mdev = 0.598/0.598/0.598/0.000 ms 00:27:27.790 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:27.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:27.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.372 ms 00:27:27.790 00:27:27.790 --- 10.0.0.1 ping statistics --- 00:27:27.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:27.790 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:27:27.790 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:27.790 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:27:27.790 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:27.790 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:27.790 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:27.790 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:27.790 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:27.790 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:27.790 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:27.790 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:27:27.790 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:27.790 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:27.790 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:27.790 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=149525 00:27:27.790 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 149525 00:27:27.790 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:27.790 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@827 -- # '[' -z 149525 ']' 00:27:27.790 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:27.790 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:27.790 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:27.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:27.790 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:27.790 20:19:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:27.790 [2024-05-15 20:19:19.910926] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:27:27.790 [2024-05-15 20:19:19.911021] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:27.790 EAL: No free 2048 kB hugepages reported on node 1 00:27:27.790 [2024-05-15 20:19:20.002110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:27.790 [2024-05-15 20:19:20.070555] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:27.790 [2024-05-15 20:19:20.070593] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:27.790 [2024-05-15 20:19:20.070601] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:27.790 [2024-05-15 20:19:20.070607] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:27.790 [2024-05-15 20:19:20.070613] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
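For anyone reproducing this environment outside the harness, the nvmf_tcp_init sequence recorded above condenses to the shell sketch below. The interface names (cvl_0_0, cvl_0_1), addresses, port, and nvmf_tgt flags are taken from the log itself; the relative binary path and the trailing ampersand are illustrative assumptions for a generic SPDK build tree rather than the harness's exact invocation.

# Namespace-based NVMe/TCP test topology, condensed from the commands logged above
ip netns add cvl_0_0_ns_spdk                                  # target-side network namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # first E810 port becomes the target interface
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port on the initiator-side interface
ping -c 1 10.0.0.2                                            # initiator-to-target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target-to-initiator reachability check
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # target runs inside the namespace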
00:27:27.790 [2024-05-15 20:19:20.070735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:27.790 [2024-05-15 20:19:20.070861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:27.790 [2024-05-15 20:19:20.071006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.790 [2024-05-15 20:19:20.071007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:28.362 20:19:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:28.362 20:19:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # return 0 00:27:28.362 20:19:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:28.362 20:19:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:28.362 20:19:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:28.362 20:19:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:28.362 20:19:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:28.362 20:19:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:28.362 20:19:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.362 20:19:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:28.362 Malloc0 00:27:28.362 20:19:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.362 20:19:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:27:28.362 20:19:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.362 20:19:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:28.362 Delay0 00:27:28.362 20:19:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.362 20:19:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:28.362 20:19:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.362 20:19:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:28.362 [2024-05-15 20:19:20.856214] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:28.623 20:19:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.623 20:19:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:28.623 20:19:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.623 20:19:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:28.623 20:19:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.623 20:19:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:28.623 20:19:20 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.623 20:19:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:28.623 20:19:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.623 20:19:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:28.623 20:19:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.623 20:19:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:28.623 [2024-05-15 20:19:20.896264] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:28.623 [2024-05-15 20:19:20.896499] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:28.623 20:19:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.624 20:19:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:30.008 20:19:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:27:30.008 20:19:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1194 -- # local i=0 00:27:30.008 20:19:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:27:30.008 20:19:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:27:30.008 20:19:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # sleep 2 00:27:32.551 20:19:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:27:32.551 20:19:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:27:32.551 20:19:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:27:32.551 20:19:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:27:32.551 20:19:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:27:32.551 20:19:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # return 0 00:27:32.551 20:19:24 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=150257 00:27:32.551 20:19:24 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:32.551 20:19:24 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:27:32.551 [global] 00:27:32.551 thread=1 00:27:32.551 invalidate=1 00:27:32.551 rw=write 00:27:32.551 time_based=1 00:27:32.551 runtime=60 00:27:32.551 ioengine=libaio 00:27:32.551 direct=1 00:27:32.551 bs=4096 00:27:32.551 iodepth=1 00:27:32.551 norandommap=0 00:27:32.551 numjobs=1 00:27:32.551 00:27:32.551 verify_dump=1 00:27:32.551 verify_backlog=512 00:27:32.551 verify_state_save=0 00:27:32.551 do_verify=1 00:27:32.551 verify=crc32c-intel 00:27:32.551 [job0] 
00:27:32.551 filename=/dev/nvme0n1 00:27:32.551 Could not set queue depth (nvme0n1) 00:27:32.551 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:32.551 fio-3.35 00:27:32.551 Starting 1 thread 00:27:35.096 20:19:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:27:35.096 20:19:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.096 20:19:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:35.096 true 00:27:35.096 20:19:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.096 20:19:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:27:35.096 20:19:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.096 20:19:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:35.096 true 00:27:35.096 20:19:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.096 20:19:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:27:35.096 20:19:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.096 20:19:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:35.096 true 00:27:35.096 20:19:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.096 20:19:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:27:35.096 20:19:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.096 20:19:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:35.096 true 00:27:35.096 20:19:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.096 20:19:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:38.398 20:19:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:38.398 20:19:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.398 20:19:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:38.398 true 00:27:38.398 20:19:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.398 20:19:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:38.398 20:19:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.398 20:19:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:38.398 true 00:27:38.398 20:19:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.398 20:19:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:38.398 20:19:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.398 20:19:30 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:27:38.398 true 00:27:38.398 20:19:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.398 20:19:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:38.398 20:19:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.398 20:19:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:38.398 true 00:27:38.398 20:19:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.398 20:19:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:38.398 20:19:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 150257 00:28:34.673 00:28:34.673 job0: (groupid=0, jobs=1): err= 0: pid=150576: Wed May 15 20:20:25 2024 00:28:34.673 read: IOPS=47, BW=190KiB/s (195kB/s)(11.2MiB/60042msec) 00:28:34.673 slat (usec): min=7, max=9961, avg=32.03, stdev=246.86 00:28:34.673 clat (usec): min=877, max=43526, avg=5494.37, stdev=12471.38 00:28:34.673 lat (usec): min=902, max=43552, avg=5526.40, stdev=12471.64 00:28:34.673 clat percentiles (usec): 00:28:34.673 | 1.00th=[ 1090], 5.00th=[ 1139], 10.00th=[ 1172], 20.00th=[ 1221], 00:28:34.673 | 30.00th=[ 1237], 40.00th=[ 1254], 50.00th=[ 1270], 60.00th=[ 1287], 00:28:34.673 | 70.00th=[ 1303], 80.00th=[ 1319], 90.00th=[41681], 95.00th=[42206], 00:28:34.673 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:28:34.673 | 99.99th=[43779] 00:28:34.673 write: IOPS=51, BW=205KiB/s (210kB/s)(12.0MiB/60042msec); 0 zone resets 00:28:34.673 slat (usec): min=9, max=33443, avg=43.08, stdev=602.85 00:28:34.673 clat (usec): min=340, max=41857k, avg=14347.42, stdev=755182.04 00:28:34.673 lat (usec): min=373, max=41857k, avg=14390.50, stdev=755182.12 00:28:34.673 clat percentiles (usec): 00:28:34.673 | 1.00th=[ 519], 5.00th=[ 603], 10.00th=[ 644], 00:28:34.673 | 20.00th=[ 676], 30.00th=[ 693], 40.00th=[ 709], 00:28:34.673 | 50.00th=[ 725], 60.00th=[ 742], 70.00th=[ 758], 00:28:34.673 | 80.00th=[ 775], 90.00th=[ 799], 95.00th=[ 816], 00:28:34.673 | 99.00th=[ 865], 99.50th=[ 889], 99.90th=[ 963], 00:28:34.673 | 99.95th=[ 1369], 99.99th=[17112761] 00:28:34.673 bw ( KiB/s): min= 16, max= 4096, per=100.00%, avg=2730.67, stdev=1891.06, samples=9 00:28:34.673 iops : min= 4, max= 1024, avg=682.67, stdev=472.77, samples=9 00:28:34.673 lat (usec) : 500=0.30%, 750=32.98%, 1000=18.56% 00:28:34.673 lat (msec) : 2=43.13%, 10=0.02%, 50=4.99%, >=2000=0.02% 00:28:34.674 cpu : usr=0.13%, sys=0.32%, ctx=5935, majf=0, minf=1 00:28:34.674 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:34.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:34.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:34.674 issued rwts: total=2856,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:34.674 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:34.674 00:28:34.674 Run status group 0 (all jobs): 00:28:34.674 READ: bw=190KiB/s (195kB/s), 190KiB/s-190KiB/s (195kB/s-195kB/s), io=11.2MiB (11.7MB), run=60042-60042msec 00:28:34.674 WRITE: bw=205KiB/s (210kB/s), 205KiB/s-205KiB/s (210kB/s-210kB/s), io=12.0MiB (12.6MB), run=60042-60042msec 00:28:34.674 00:28:34.674 Disk stats (read/write): 00:28:34.674 nvme0n1: ios=2906/3072, merge=0/0, ticks=16861/2195, in_queue=19056, 
util=100.00% 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:34.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1215 -- # local i=0 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # return 0 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:28:34.674 nvmf hotplug test: fio successful as expected 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:34.674 rmmod nvme_tcp 00:28:34.674 rmmod nvme_fabrics 00:28:34.674 rmmod nvme_keyring 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 149525 ']' 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 149525 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@946 -- # '[' -z 149525 ']' 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # kill -0 149525 00:28:34.674 
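For reference, the target-side configuration that initiator_timeout.sh drove through rpc_cmd in this run maps onto the standalone scripts/rpc.py calls below. This is a sketch, not the harness's literal code path: rpc_cmd is the autotest wrapper around the same RPCs, and the default /var/tmp/spdk.sock socket is assumed. All names and values are copied from the log.

# Delay0-backed subsystem setup exercised by the test
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Mid-run, the delay-bdev latencies are raised to stall I/O and provoke initiator timeouts, then restored to 30
scripts/rpc.py bdev_delay_update_latency Delay0 avg_read 31000000
scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
scripts/rpc.py bdev_delay_update_latency Delay0 p99_read 31000000
scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000   # value as logged
scripts/rpc.py bdev_delay_update_latency Delay0 avg_read 30
scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 30
scripts/rpc.py bdev_delay_update_latency Delay0 p99_read 30
scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 30

The initiator side then connects with nvme connect against 10.0.0.2:4420 and drives the fio job shown above at /dev/nvme0n1, which is why the run completes with "fio successful as expected" despite the injected stalls.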
20:20:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # uname 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 149525 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 149525' 00:28:34.674 killing process with pid 149525 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # kill 149525 00:28:34.674 [2024-05-15 20:20:25.386295] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # wait 149525 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:34.674 20:20:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:35.245 20:20:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:35.245 00:28:35.245 real 1m16.065s 00:28:35.245 user 4m37.894s 00:28:35.245 sys 0m7.923s 00:28:35.246 20:20:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:35.246 20:20:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:35.246 ************************************ 00:28:35.246 END TEST nvmf_initiator_timeout 00:28:35.246 ************************************ 00:28:35.246 20:20:27 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:28:35.246 20:20:27 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:28:35.246 20:20:27 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:28:35.246 20:20:27 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:28:35.246 20:20:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:43.388 20:20:35 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:43.388 20:20:35 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:28:43.388 20:20:35 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:43.388 20:20:35 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:43.388 20:20:35 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:43.388 20:20:35 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:43.388 20:20:35 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:43.388 20:20:35 
nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:28:43.388 20:20:35 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:43.388 20:20:35 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:43.389 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:43.389 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == 
e810 ]] 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:43.389 Found net devices under 0000:31:00.0: cvl_0_0 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:43.389 Found net devices under 0000:31:00.1: cvl_0_1 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:28:43.389 20:20:35 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:43.389 20:20:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:43.389 20:20:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:43.389 20:20:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:43.389 ************************************ 00:28:43.389 START TEST nvmf_perf_adq 00:28:43.389 ************************************ 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:43.389 * Looking for test storage... 
00:28:43.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:28:43.389 20:20:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:51.531 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:51.531 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:28:51.531 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:51.531 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:51.531 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:51.531 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:51.531 20:20:43 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:28:51.531 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:28:51.531 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:51.531 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:28:51.531 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:28:51.531 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:28:51.531 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:28:51.531 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:28:51.531 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:28:51.531 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:51.531 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:51.531 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:51.531 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:51.531 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:51.531 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:51.531 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:51.532 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:51.532 Found 0000:31:00.1 (0x8086 - 0x159b) 
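Each of these discovery passes follows the same pattern: match the supported vendor:device IDs against the PCI bus cache, then resolve every matching function to its kernel netdev through sysfs. The condensed sketch below is lifted from the logged shell, with the link-state and RDMA branches omitted for brevity.

# PCI function -> TCP interface resolution as performed by nvmf/common.sh above
pci_devs=("${e810[@]}")                               # on this rig: two Intel E810 ports, 0x8086:0x159b
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # netdev entries bound to this PCI function
    pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the sysfs path, keeping names such as cvl_0_0
    net_devs+=("${pci_net_devs[@]}")
done
TCP_INTERFACE_LIST=("${net_devs[@]}")                 # later split into target (cvl_0_0) and initiator (cvl_0_1)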
00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:51.532 Found net devices under 0000:31:00.0: cvl_0_0 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:51.532 Found net devices under 0000:31:00.1: cvl_0_1 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:28:51.532 20:20:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:28:52.917 20:20:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:28:55.465 20:20:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:29:00.761 20:20:52 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:00.761 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:00.761 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:00.761 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:00.762 Found net devices under 0000:31:00.0: cvl_0_0 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:00.762 Found net devices under 0000:31:00.1: cvl_0_1 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:00.762 20:20:52 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:00.762 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:00.762 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.585 ms 00:29:00.762 00:29:00.762 --- 10.0.0.2 ping statistics --- 00:29:00.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.762 rtt min/avg/max/mdev = 0.585/0.585/0.585/0.000 ms 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:00.762 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:00.762 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms 00:29:00.762 00:29:00.762 --- 10.0.0.1 ping statistics --- 00:29:00.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.762 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=172552 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 172552 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 172552 ']' 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:00.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:00.762 20:20:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:00.762 [2024-05-15 20:20:52.852174] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
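(Editor's note) The nvmf_tcp_init trace above wires the two E810 ports into a target/initiator pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, port 4420 is opened in iptables, and both directions are ping-checked before the target is launched inside the namespace. The lines below are a minimal standalone sketch of that same wiring, not part of the harness; TGT_IF, INI_IF, the namespace name and the addresses are taken from the trace purely for illustration.

  # Sketch only: reproduce the namespace split performed by nvmf_tcp_init above.
  TGT_IF=cvl_0_0          # port that will carry the NVMe/TCP target
  INI_IF=cvl_0_1          # port left in the root namespace for the initiator
  NS=cvl_0_0_ns_spdk

  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                          # isolate the target port
  ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator address
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target address
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP
  ping -c 1 10.0.0.2                                         # root ns -> target port
  ip netns exec "$NS" ping -c 1 10.0.0.1                     # target ns -> initiator port

Routing traffic through two physical ports split across namespaces keeps the NVMe/TCP I/O on real hardware instead of loopback, which is what the ADQ checks later in this log depend on.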
00:29:00.762 [2024-05-15 20:20:52.852239] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:00.762 EAL: No free 2048 kB hugepages reported on node 1 00:29:00.762 [2024-05-15 20:20:52.945971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:00.762 [2024-05-15 20:20:53.044346] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:00.762 [2024-05-15 20:20:53.044407] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:00.762 [2024-05-15 20:20:53.044415] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:00.762 [2024-05-15 20:20:53.044422] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:00.762 [2024-05-15 20:20:53.044429] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:00.762 [2024-05-15 20:20:53.044575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.762 [2024-05-15 20:20:53.044708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:00.762 [2024-05-15 20:20:53.044875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.762 [2024-05-15 20:20:53.044876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:01.334 20:20:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:01.334 20:20:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:29:01.334 20:20:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:01.334 20:20:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:01.334 20:20:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:01.334 20:20:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:01.334 20:20:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:29:01.334 20:20:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:29:01.334 20:20:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:29:01.334 20:20:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.334 20:20:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:01.334 20:20:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.334 20:20:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:29:01.334 20:20:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:29:01.334 20:20:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.335 20:20:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:01.335 20:20:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.335 20:20:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:29:01.335 20:20:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.335 20:20:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # 
set +x 00:29:01.595 20:20:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.595 20:20:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:29:01.595 20:20:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.595 20:20:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:01.595 [2024-05-15 20:20:53.910264] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:01.595 20:20:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.595 20:20:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:01.595 20:20:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.595 20:20:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:01.595 Malloc1 00:29:01.595 20:20:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.595 20:20:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:01.595 20:20:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.595 20:20:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:01.595 20:20:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.595 20:20:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:01.595 20:20:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.595 20:20:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:01.595 20:20:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.595 20:20:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:01.595 20:20:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.595 20:20:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:01.595 [2024-05-15 20:20:53.969389] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:01.595 [2024-05-15 20:20:53.969639] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:01.595 20:20:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.595 20:20:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=172778 00:29:01.595 20:20:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:29:01.595 20:20:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:01.595 EAL: No free 2048 kB hugepages reported on node 1 00:29:03.548 20:20:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:29:03.548 20:20:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:03.548 20:20:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 
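(Editor's note) The rpc_cmd calls traced above (adq_configure_nvmf_target 0) boil down to a short scripts/rpc.py sequence against a target started with --wait-for-rpc. The sketch below keeps the exact subcommands and flags that appear in the trace; the RPC script path and the default /var/tmp/spdk.sock socket are assumptions for illustration.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed checkout path
  # Socket options must be set before the framework is initialized.
  $RPC sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
  $RPC framework_start_init
  # TCP transport with the ADQ-relevant socket priority (0 in this first pass).
  $RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
  $RPC bdev_malloc_create 64 512 -b Malloc1
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # While spdk_nvme_perf runs, verify every poll group carries one I/O qpair:
  $RPC nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' | wc -l

The nvmf_get_stats JSON that follows in the log is exactly what that last pipeline consumes; the test expects the count to equal 4, one active I/O qpair per reactor core.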
00:29:03.548 20:20:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:03.548 20:20:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:29:03.548 "tick_rate": 2400000000, 00:29:03.548 "poll_groups": [ 00:29:03.548 { 00:29:03.548 "name": "nvmf_tgt_poll_group_000", 00:29:03.548 "admin_qpairs": 1, 00:29:03.548 "io_qpairs": 1, 00:29:03.548 "current_admin_qpairs": 1, 00:29:03.548 "current_io_qpairs": 1, 00:29:03.548 "pending_bdev_io": 0, 00:29:03.548 "completed_nvme_io": 20587, 00:29:03.548 "transports": [ 00:29:03.548 { 00:29:03.548 "trtype": "TCP" 00:29:03.548 } 00:29:03.548 ] 00:29:03.548 }, 00:29:03.548 { 00:29:03.548 "name": "nvmf_tgt_poll_group_001", 00:29:03.548 "admin_qpairs": 0, 00:29:03.548 "io_qpairs": 1, 00:29:03.548 "current_admin_qpairs": 0, 00:29:03.548 "current_io_qpairs": 1, 00:29:03.548 "pending_bdev_io": 0, 00:29:03.548 "completed_nvme_io": 28995, 00:29:03.548 "transports": [ 00:29:03.548 { 00:29:03.548 "trtype": "TCP" 00:29:03.548 } 00:29:03.548 ] 00:29:03.548 }, 00:29:03.548 { 00:29:03.548 "name": "nvmf_tgt_poll_group_002", 00:29:03.548 "admin_qpairs": 0, 00:29:03.548 "io_qpairs": 1, 00:29:03.548 "current_admin_qpairs": 0, 00:29:03.548 "current_io_qpairs": 1, 00:29:03.548 "pending_bdev_io": 0, 00:29:03.548 "completed_nvme_io": 20126, 00:29:03.548 "transports": [ 00:29:03.548 { 00:29:03.548 "trtype": "TCP" 00:29:03.548 } 00:29:03.548 ] 00:29:03.548 }, 00:29:03.548 { 00:29:03.548 "name": "nvmf_tgt_poll_group_003", 00:29:03.548 "admin_qpairs": 0, 00:29:03.548 "io_qpairs": 1, 00:29:03.548 "current_admin_qpairs": 0, 00:29:03.548 "current_io_qpairs": 1, 00:29:03.548 "pending_bdev_io": 0, 00:29:03.548 "completed_nvme_io": 20587, 00:29:03.548 "transports": [ 00:29:03.548 { 00:29:03.548 "trtype": "TCP" 00:29:03.548 } 00:29:03.548 ] 00:29:03.548 } 00:29:03.548 ] 00:29:03.548 }' 00:29:03.548 20:20:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:29:03.548 20:20:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:29:03.809 20:20:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:29:03.809 20:20:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:29:03.809 20:20:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 172778 00:29:11.946 Initializing NVMe Controllers 00:29:11.946 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:11.946 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:11.946 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:11.946 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:11.946 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:11.946 Initialization complete. Launching workers. 
00:29:11.946 ======================================================== 00:29:11.946 Latency(us) 00:29:11.946 Device Information : IOPS MiB/s Average min max 00:29:11.946 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10608.80 41.44 6034.32 1334.37 9860.09 00:29:11.946 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15126.50 59.09 4238.01 1291.84 43555.40 00:29:11.946 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10505.60 41.04 6091.81 1342.37 10351.38 00:29:11.946 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10774.50 42.09 5945.97 1810.70 42258.90 00:29:11.946 ======================================================== 00:29:11.946 Total : 47015.40 183.65 5448.98 1291.84 43555.40 00:29:11.946 00:29:11.946 20:21:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:29:11.946 20:21:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:11.946 20:21:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:29:11.946 20:21:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:11.946 20:21:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:29:11.946 20:21:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:11.946 20:21:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:11.946 rmmod nvme_tcp 00:29:11.946 rmmod nvme_fabrics 00:29:11.946 rmmod nvme_keyring 00:29:11.946 20:21:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:11.946 20:21:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:29:11.946 20:21:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:29:11.946 20:21:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 172552 ']' 00:29:11.946 20:21:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 172552 00:29:11.946 20:21:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 172552 ']' 00:29:11.946 20:21:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 172552 00:29:11.946 20:21:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:29:11.946 20:21:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:11.946 20:21:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 172552 00:29:11.946 20:21:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:11.946 20:21:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:11.946 20:21:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 172552' 00:29:11.946 killing process with pid 172552 00:29:11.946 20:21:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 172552 00:29:11.946 [2024-05-15 20:21:04.268597] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:11.946 20:21:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 172552 00:29:11.946 20:21:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:11.946 20:21:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:11.946 20:21:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:11.946 20:21:04 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:11.946 20:21:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:11.946 20:21:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.946 20:21:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:11.946 20:21:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.494 20:21:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:14.494 20:21:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:29:14.494 20:21:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:29:16.409 20:21:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:29:18.325 20:21:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:29:23.619 
20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:23.619 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:23.619 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:23.619 Found net devices under 0000:31:00.0: cvl_0_0 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:23.619 Found net devices under 0000:31:00.1: cvl_0_1 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush 
cvl_0_1 00:29:23.619 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:23.620 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:23.620 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:23.620 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:23.620 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:23.620 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:23.620 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:23.620 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:23.620 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:23.620 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:23.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:29:23.620 00:29:23.620 --- 10.0.0.2 ping statistics --- 00:29:23.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.620 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:29:23.620 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:23.620 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:23.620 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:29:23.620 00:29:23.620 --- 10.0.0.1 ping statistics --- 00:29:23.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.620 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:29:23.620 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:23.620 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:29:23.620 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:23.620 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:23.620 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:23.620 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:23.620 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:23.620 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:23.620 20:21:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:23.620 20:21:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:29:23.620 20:21:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:29:23.620 20:21:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:29:23.620 20:21:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:29:23.620 net.core.busy_poll = 1 00:29:23.620 20:21:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:29:23.620 net.core.busy_read = 1 00:29:23.620 20:21:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:29:23.620 20:21:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec 
cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:29:23.882 20:21:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:29:23.882 20:21:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:29:23.882 20:21:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:29:23.882 20:21:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:23.882 20:21:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:23.882 20:21:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:23.882 20:21:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:23.882 20:21:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=178072 00:29:23.882 20:21:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 178072 00:29:23.882 20:21:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 178072 ']' 00:29:23.882 20:21:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:23.882 20:21:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:23.882 20:21:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:23.882 20:21:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:23.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:23.882 20:21:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:23.882 20:21:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:23.882 [2024-05-15 20:21:16.373307] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:29:23.882 [2024-05-15 20:21:16.373380] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:24.144 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.144 [2024-05-15 20:21:16.466440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:24.144 [2024-05-15 20:21:16.561138] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:24.144 [2024-05-15 20:21:16.561198] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:24.144 [2024-05-15 20:21:16.561206] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:24.144 [2024-05-15 20:21:16.561213] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:24.144 [2024-05-15 20:21:16.561219] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
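(Editor's note) Before this second target comes up, adq_configure_driver (traced above) puts the E810 port into ADQ mode: hardware TC offload on, the channel-pkt-inspect-optimize private flag off, busy polling enabled, an mqprio split into two traffic classes, and a flower filter that steers NVMe/TCP traffic for 10.0.0.2:4420 into hardware TC 1. Below is a consolidated sketch of those commands; IFACE and SPDK_DIR are placeholders, and in the harness each command actually runs inside the cvl_0_0_ns_spdk namespace via ip netns exec.

  IFACE=cvl_0_0            # E810 port carrying the NVMe/TCP target (placeholder)
  SPDK_DIR=/path/to/spdk   # placeholder for the checkout shown in the trace

  ethtool --offload "$IFACE" hw-tc-offload on                       # required for ADQ
  ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1                                    # enable socket busy polling
  sysctl -w net.core.busy_read=1

  # Two traffic classes: TC0 = 2 default queues (2@0), TC1 = 2 dedicated ADQ queues (2@2).
  tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  tc qdisc add dev "$IFACE" ingress

  # Steer NVMe/TCP traffic for 10.0.0.2:4420 into hardware TC 1, bypassing software filtering.
  tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

  # Align transmit/receive queue affinity with the ADQ queue set (SPDK helper).
  "$SPDK_DIR"/scripts/perf/nvmf/set_xps_rxqs "$IFACE"

The target side pairs this with --enable-placement-id 1 and --sock-priority 1, as the RPC trace that follows shows, so that SPDK poll groups line up with the dedicated ADQ queues.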
00:29:24.144 [2024-05-15 20:21:16.561364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.144 [2024-05-15 20:21:16.561493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:24.144 [2024-05-15 20:21:16.561770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:24.144 [2024-05-15 20:21:16.561773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.086 20:21:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:25.086 20:21:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:29:25.086 20:21:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:25.086 20:21:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:25.086 20:21:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:25.086 20:21:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:25.086 20:21:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:29:25.086 20:21:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:29:25.086 20:21:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:29:25.086 20:21:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.086 20:21:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:25.086 20:21:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.086 20:21:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:29:25.086 20:21:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:29:25.086 20:21:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.086 20:21:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:25.086 20:21:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.086 20:21:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:29:25.086 20:21:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.086 20:21:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:25.086 20:21:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.086 20:21:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:29:25.086 20:21:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.086 20:21:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:25.086 [2024-05-15 20:21:17.441565] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:25.086 20:21:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.086 20:21:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:25.087 20:21:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.087 20:21:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:25.087 Malloc1 00:29:25.087 20:21:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.087 20:21:17 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:25.087 20:21:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.087 20:21:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:25.087 20:21:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.087 20:21:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:25.087 20:21:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.087 20:21:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:25.087 20:21:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.087 20:21:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:25.087 20:21:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.087 20:21:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:25.087 [2024-05-15 20:21:17.500690] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:25.087 [2024-05-15 20:21:17.500921] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:25.087 20:21:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.087 20:21:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=178180 00:29:25.087 20:21:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:29:25.087 20:21:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:25.087 EAL: No free 2048 kB hugepages reported on node 1 00:29:27.632 20:21:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:29:27.632 20:21:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.632 20:21:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:27.632 20:21:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.632 20:21:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:29:27.632 "tick_rate": 2400000000, 00:29:27.632 "poll_groups": [ 00:29:27.632 { 00:29:27.632 "name": "nvmf_tgt_poll_group_000", 00:29:27.632 "admin_qpairs": 1, 00:29:27.632 "io_qpairs": 1, 00:29:27.632 "current_admin_qpairs": 1, 00:29:27.632 "current_io_qpairs": 1, 00:29:27.632 "pending_bdev_io": 0, 00:29:27.632 "completed_nvme_io": 27825, 00:29:27.632 "transports": [ 00:29:27.632 { 00:29:27.632 "trtype": "TCP" 00:29:27.632 } 00:29:27.632 ] 00:29:27.632 }, 00:29:27.632 { 00:29:27.632 "name": "nvmf_tgt_poll_group_001", 00:29:27.632 "admin_qpairs": 0, 00:29:27.632 "io_qpairs": 3, 00:29:27.632 "current_admin_qpairs": 0, 00:29:27.632 "current_io_qpairs": 3, 00:29:27.632 "pending_bdev_io": 0, 00:29:27.632 "completed_nvme_io": 41513, 00:29:27.632 "transports": [ 00:29:27.632 { 00:29:27.632 "trtype": "TCP" 00:29:27.632 } 00:29:27.632 ] 00:29:27.632 }, 00:29:27.632 { 00:29:27.632 "name": 
"nvmf_tgt_poll_group_002", 00:29:27.632 "admin_qpairs": 0, 00:29:27.632 "io_qpairs": 0, 00:29:27.632 "current_admin_qpairs": 0, 00:29:27.632 "current_io_qpairs": 0, 00:29:27.632 "pending_bdev_io": 0, 00:29:27.632 "completed_nvme_io": 0, 00:29:27.632 "transports": [ 00:29:27.632 { 00:29:27.632 "trtype": "TCP" 00:29:27.632 } 00:29:27.632 ] 00:29:27.632 }, 00:29:27.632 { 00:29:27.632 "name": "nvmf_tgt_poll_group_003", 00:29:27.632 "admin_qpairs": 0, 00:29:27.632 "io_qpairs": 0, 00:29:27.632 "current_admin_qpairs": 0, 00:29:27.632 "current_io_qpairs": 0, 00:29:27.632 "pending_bdev_io": 0, 00:29:27.632 "completed_nvme_io": 0, 00:29:27.632 "transports": [ 00:29:27.632 { 00:29:27.632 "trtype": "TCP" 00:29:27.632 } 00:29:27.632 ] 00:29:27.632 } 00:29:27.632 ] 00:29:27.632 }' 00:29:27.632 20:21:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:29:27.632 20:21:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:29:27.632 20:21:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:29:27.632 20:21:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:29:27.632 20:21:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 178180 00:29:35.786 Initializing NVMe Controllers 00:29:35.786 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:35.786 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:35.786 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:35.786 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:35.786 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:35.786 Initialization complete. Launching workers. 
00:29:35.786 ======================================================== 00:29:35.786 Latency(us) 00:29:35.786 Device Information : IOPS MiB/s Average min max 00:29:35.786 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 14727.37 57.53 4345.71 1271.97 6778.04 00:29:35.786 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7482.69 29.23 8552.68 1586.03 54091.01 00:29:35.786 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7094.79 27.71 9021.35 1550.57 54998.95 00:29:35.786 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7440.09 29.06 8629.43 1235.13 55019.28 00:29:35.786 ======================================================== 00:29:35.786 Total : 36744.94 143.53 6972.56 1235.13 55019.28 00:29:35.786 00:29:35.786 20:21:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:29:35.786 20:21:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:35.786 20:21:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:29:35.786 20:21:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:35.786 20:21:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:29:35.786 20:21:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:35.786 20:21:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:35.786 rmmod nvme_tcp 00:29:35.786 rmmod nvme_fabrics 00:29:35.786 rmmod nvme_keyring 00:29:35.786 20:21:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:35.786 20:21:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:29:35.786 20:21:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:29:35.786 20:21:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 178072 ']' 00:29:35.786 20:21:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 178072 00:29:35.786 20:21:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 178072 ']' 00:29:35.786 20:21:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 178072 00:29:35.786 20:21:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:29:35.786 20:21:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:35.786 20:21:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 178072 00:29:35.786 20:21:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:35.786 20:21:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:35.786 20:21:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 178072' 00:29:35.786 killing process with pid 178072 00:29:35.786 20:21:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 178072 00:29:35.786 [2024-05-15 20:21:27.832229] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:35.786 20:21:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 178072 00:29:35.786 20:21:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:35.786 20:21:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:35.786 20:21:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:35.786 20:21:27 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:35.786 20:21:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:35.786 20:21:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:35.786 20:21:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:35.786 20:21:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.086 20:21:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:39.086 20:21:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:29:39.086 00:29:39.086 real 0m55.704s 00:29:39.086 user 2m48.767s 00:29:39.086 sys 0m12.730s 00:29:39.086 20:21:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:39.086 20:21:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:39.086 ************************************ 00:29:39.086 END TEST nvmf_perf_adq 00:29:39.086 ************************************ 00:29:39.086 20:21:31 nvmf_tcp -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:39.086 20:21:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:39.086 20:21:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:39.086 20:21:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:39.086 ************************************ 00:29:39.086 START TEST nvmf_shutdown 00:29:39.086 ************************************ 00:29:39.086 20:21:31 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:39.086 * Looking for test storage... 
00:29:39.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:39.086 20:21:31 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:39.086 20:21:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:29:39.086 20:21:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:39.086 20:21:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:39.086 20:21:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:39.086 20:21:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:39.086 20:21:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:39.086 20:21:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:39.086 20:21:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:39.086 20:21:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:39.086 20:21:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:39.086 20:21:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:39.086 20:21:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:39.086 20:21:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:39.086 20:21:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:39.086 20:21:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:39.086 20:21:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:39.086 20:21:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:39.086 20:21:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:39.086 20:21:31 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:39.086 20:21:31 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:39.086 20:21:31 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:39.086 20:21:31 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.086 20:21:31 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.087 20:21:31 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.087 20:21:31 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:29:39.087 20:21:31 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.087 20:21:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:29:39.087 20:21:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:39.087 20:21:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:39.087 20:21:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:39.087 20:21:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:39.087 20:21:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:39.087 20:21:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:39.087 20:21:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:39.087 20:21:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:39.087 20:21:31 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:39.087 20:21:31 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:39.087 20:21:31 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:29:39.087 20:21:31 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:39.087 20:21:31 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:39.087 20:21:31 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:39.087 ************************************ 00:29:39.087 START TEST nvmf_shutdown_tc1 00:29:39.087 ************************************ 00:29:39.087 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:29:39.087 20:21:31 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:29:39.087 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:29:39.087 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:39.087 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:39.087 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:39.087 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:39.087 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:39.087 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.087 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:39.087 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.087 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:39.087 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:39.087 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:39.087 20:21:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:47.237 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:47.237 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:47.237 20:21:39 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:47.237 Found net devices under 0000:31:00.0: cvl_0_0 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:47.237 Found net devices under 0000:31:00.1: cvl_0_1 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:47.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:47.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:29:47.237 00:29:47.237 --- 10.0.0.2 ping statistics --- 00:29:47.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.237 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:29:47.237 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:47.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:47.238 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.357 ms 00:29:47.238 00:29:47.238 --- 10.0.0.1 ping statistics --- 00:29:47.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.238 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:29:47.238 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:47.238 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:29:47.238 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:47.238 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:47.238 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:47.238 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:47.238 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:47.238 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:47.238 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:47.238 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:29:47.238 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:47.238 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:47.238 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:47.238 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=185139 00:29:47.238 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 185139 00:29:47.238 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:47.238 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 185139 ']' 00:29:47.238 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:47.238 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:47.238 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:47.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:47.238 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:47.238 20:21:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:47.238 [2024-05-15 20:21:39.633969] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
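For readability, the namespace setup that nvmftestinit/nvmf_tcp_init performs in the trace above can be condensed as the short sketch below. This is not the library code itself: the interface names (cvl_0_0, cvl_0_1), the 10.0.0.0/24 addressing, the namespace name and the nvmf_tgt arguments are simply the values used by this particular run, and the binary path is shortened to be relative to the SPDK repository root instead of the absolute workspace path used in the trace.

ip netns add cvl_0_0_ns_spdk                                  # target-side network namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move one E810 port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP traffic back in
ping -c 1 10.0.0.2                                            # initiator -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator reachability check
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E   # nvmf_tgt runs inside the namespace

The point of the arrangement is that both "sides" of the TCP connection use real NIC ports on the same host: the target listens on 10.0.0.2 inside the namespace, while the initiator-side tools connect from 10.0.0.1 in the default namespace.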
00:29:47.238 [2024-05-15 20:21:39.634031] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:47.238 EAL: No free 2048 kB hugepages reported on node 1 00:29:47.238 [2024-05-15 20:21:39.711627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:47.500 [2024-05-15 20:21:39.785190] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:47.500 [2024-05-15 20:21:39.785230] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:47.500 [2024-05-15 20:21:39.785238] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:47.500 [2024-05-15 20:21:39.785245] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:47.500 [2024-05-15 20:21:39.785251] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:47.500 [2024-05-15 20:21:39.785398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:47.500 [2024-05-15 20:21:39.785545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:47.500 [2024-05-15 20:21:39.785703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:47.500 [2024-05-15 20:21:39.785704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:48.071 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:48.071 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:29:48.071 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:48.071 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:48.071 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:48.071 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:48.071 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:48.071 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:48.071 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:48.071 [2024-05-15 20:21:40.568227] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:48.332 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.332 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:29:48.332 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:29:48.332 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:48.332 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:48.332 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:48.332 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:48.332 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:48.332 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:48.332 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:48.332 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:48.332 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:48.332 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:48.332 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:48.332 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:48.332 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:48.332 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:48.332 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:48.332 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:48.332 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:48.332 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:48.332 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:48.332 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:48.332 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:48.332 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:48.332 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:48.332 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:29:48.332 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:48.332 20:21:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:48.332 Malloc1 00:29:48.332 [2024-05-15 20:21:40.671303] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:48.332 [2024-05-15 20:21:40.671527] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:48.332 Malloc2 00:29:48.332 Malloc3 00:29:48.332 Malloc4 00:29:48.332 Malloc5 00:29:48.593 Malloc6 00:29:48.593 Malloc7 00:29:48.593 Malloc8 00:29:48.593 Malloc9 00:29:48.593 Malloc10 00:29:48.593 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.593 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:29:48.593 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:48.593 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:48.593 20:21:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=185380 00:29:48.593 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 185380 /var/tmp/bdevperf.sock 00:29:48.593 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 185380 ']' 00:29:48.593 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:48.593 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:48.593 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:48.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:48.593 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:29:48.594 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:48.594 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:48.594 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:48.594 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:29:48.594 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:29:48.594 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:48.594 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:48.594 { 00:29:48.594 "params": { 00:29:48.594 "name": "Nvme$subsystem", 00:29:48.594 "trtype": "$TEST_TRANSPORT", 00:29:48.594 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:48.594 "adrfam": "ipv4", 00:29:48.594 "trsvcid": "$NVMF_PORT", 00:29:48.594 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:48.594 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:48.594 "hdgst": ${hdgst:-false}, 00:29:48.594 "ddgst": ${ddgst:-false} 00:29:48.594 }, 00:29:48.594 "method": "bdev_nvme_attach_controller" 00:29:48.594 } 00:29:48.594 EOF 00:29:48.594 )") 00:29:48.594 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:48.594 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:48.594 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:48.594 { 00:29:48.594 "params": { 00:29:48.594 "name": "Nvme$subsystem", 00:29:48.594 "trtype": "$TEST_TRANSPORT", 00:29:48.594 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:48.594 "adrfam": "ipv4", 00:29:48.594 "trsvcid": "$NVMF_PORT", 00:29:48.594 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:48.594 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:48.594 "hdgst": ${hdgst:-false}, 00:29:48.594 "ddgst": ${ddgst:-false} 00:29:48.594 }, 00:29:48.594 "method": "bdev_nvme_attach_controller" 00:29:48.594 } 00:29:48.594 EOF 00:29:48.594 )") 00:29:48.594 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:48.594 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- 
# for subsystem in "${@:-1}" 00:29:48.594 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:48.594 { 00:29:48.594 "params": { 00:29:48.594 "name": "Nvme$subsystem", 00:29:48.594 "trtype": "$TEST_TRANSPORT", 00:29:48.594 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:48.594 "adrfam": "ipv4", 00:29:48.594 "trsvcid": "$NVMF_PORT", 00:29:48.594 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:48.594 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:48.594 "hdgst": ${hdgst:-false}, 00:29:48.594 "ddgst": ${ddgst:-false} 00:29:48.594 }, 00:29:48.594 "method": "bdev_nvme_attach_controller" 00:29:48.594 } 00:29:48.594 EOF 00:29:48.594 )") 00:29:48.855 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:48.855 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:48.855 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:48.855 { 00:29:48.855 "params": { 00:29:48.855 "name": "Nvme$subsystem", 00:29:48.855 "trtype": "$TEST_TRANSPORT", 00:29:48.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:48.855 "adrfam": "ipv4", 00:29:48.855 "trsvcid": "$NVMF_PORT", 00:29:48.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:48.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:48.855 "hdgst": ${hdgst:-false}, 00:29:48.855 "ddgst": ${ddgst:-false} 00:29:48.855 }, 00:29:48.855 "method": "bdev_nvme_attach_controller" 00:29:48.855 } 00:29:48.855 EOF 00:29:48.855 )") 00:29:48.855 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:48.855 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:48.855 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:48.855 { 00:29:48.855 "params": { 00:29:48.855 "name": "Nvme$subsystem", 00:29:48.855 "trtype": "$TEST_TRANSPORT", 00:29:48.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:48.855 "adrfam": "ipv4", 00:29:48.855 "trsvcid": "$NVMF_PORT", 00:29:48.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:48.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:48.855 "hdgst": ${hdgst:-false}, 00:29:48.855 "ddgst": ${ddgst:-false} 00:29:48.855 }, 00:29:48.855 "method": "bdev_nvme_attach_controller" 00:29:48.855 } 00:29:48.855 EOF 00:29:48.855 )") 00:29:48.855 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:48.855 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:48.856 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:48.856 { 00:29:48.856 "params": { 00:29:48.856 "name": "Nvme$subsystem", 00:29:48.856 "trtype": "$TEST_TRANSPORT", 00:29:48.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:48.856 "adrfam": "ipv4", 00:29:48.856 "trsvcid": "$NVMF_PORT", 00:29:48.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:48.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:48.856 "hdgst": ${hdgst:-false}, 00:29:48.856 "ddgst": ${ddgst:-false} 00:29:48.856 }, 00:29:48.856 "method": "bdev_nvme_attach_controller" 00:29:48.856 } 00:29:48.856 EOF 00:29:48.856 )") 00:29:48.856 [2024-05-15 20:21:41.119460] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
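The repeated blocks above are gen_nvmf_target_json expanding one template per subsystem. A simplified sketch of what that helper does is shown below; the real function lives in nvmf/common.sh and additionally wraps the result and pipes it through jq, which is omitted here, and the environment variables ($TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP, $NVMF_PORT) are already substituted with this run's values (tcp, 10.0.0.2, 4420).

config=()
for subsystem in {1..10}; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
(IFS=,; printf '%s\n' "${config[*]}")   # comma-joined controller list, as printed later in this trace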
00:29:48.856 [2024-05-15 20:21:41.119512] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:48.856 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:48.856 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:48.856 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:48.856 { 00:29:48.856 "params": { 00:29:48.856 "name": "Nvme$subsystem", 00:29:48.856 "trtype": "$TEST_TRANSPORT", 00:29:48.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:48.856 "adrfam": "ipv4", 00:29:48.856 "trsvcid": "$NVMF_PORT", 00:29:48.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:48.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:48.856 "hdgst": ${hdgst:-false}, 00:29:48.856 "ddgst": ${ddgst:-false} 00:29:48.856 }, 00:29:48.856 "method": "bdev_nvme_attach_controller" 00:29:48.856 } 00:29:48.856 EOF 00:29:48.856 )") 00:29:48.856 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:48.856 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:48.856 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:48.856 { 00:29:48.856 "params": { 00:29:48.856 "name": "Nvme$subsystem", 00:29:48.856 "trtype": "$TEST_TRANSPORT", 00:29:48.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:48.856 "adrfam": "ipv4", 00:29:48.856 "trsvcid": "$NVMF_PORT", 00:29:48.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:48.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:48.856 "hdgst": ${hdgst:-false}, 00:29:48.856 "ddgst": ${ddgst:-false} 00:29:48.856 }, 00:29:48.856 "method": "bdev_nvme_attach_controller" 00:29:48.856 } 00:29:48.856 EOF 00:29:48.856 )") 00:29:48.856 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:48.856 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:48.856 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:48.856 { 00:29:48.856 "params": { 00:29:48.856 "name": "Nvme$subsystem", 00:29:48.856 "trtype": "$TEST_TRANSPORT", 00:29:48.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:48.856 "adrfam": "ipv4", 00:29:48.856 "trsvcid": "$NVMF_PORT", 00:29:48.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:48.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:48.856 "hdgst": ${hdgst:-false}, 00:29:48.856 "ddgst": ${ddgst:-false} 00:29:48.856 }, 00:29:48.856 "method": "bdev_nvme_attach_controller" 00:29:48.856 } 00:29:48.856 EOF 00:29:48.856 )") 00:29:48.856 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:48.856 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:48.856 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:48.856 { 00:29:48.856 "params": { 00:29:48.856 "name": "Nvme$subsystem", 00:29:48.856 "trtype": "$TEST_TRANSPORT", 00:29:48.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:48.856 "adrfam": "ipv4", 00:29:48.856 "trsvcid": "$NVMF_PORT", 00:29:48.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:48.856 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:29:48.856 "hdgst": ${hdgst:-false}, 00:29:48.856 "ddgst": ${ddgst:-false} 00:29:48.856 }, 00:29:48.856 "method": "bdev_nvme_attach_controller" 00:29:48.856 } 00:29:48.856 EOF 00:29:48.856 )") 00:29:48.856 EAL: No free 2048 kB hugepages reported on node 1 00:29:48.856 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:48.856 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:29:48.856 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:29:48.856 20:21:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:48.856 "params": { 00:29:48.856 "name": "Nvme1", 00:29:48.856 "trtype": "tcp", 00:29:48.856 "traddr": "10.0.0.2", 00:29:48.856 "adrfam": "ipv4", 00:29:48.856 "trsvcid": "4420", 00:29:48.856 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:48.856 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:48.856 "hdgst": false, 00:29:48.856 "ddgst": false 00:29:48.856 }, 00:29:48.856 "method": "bdev_nvme_attach_controller" 00:29:48.856 },{ 00:29:48.856 "params": { 00:29:48.856 "name": "Nvme2", 00:29:48.856 "trtype": "tcp", 00:29:48.856 "traddr": "10.0.0.2", 00:29:48.856 "adrfam": "ipv4", 00:29:48.856 "trsvcid": "4420", 00:29:48.856 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:48.856 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:48.856 "hdgst": false, 00:29:48.856 "ddgst": false 00:29:48.856 }, 00:29:48.856 "method": "bdev_nvme_attach_controller" 00:29:48.856 },{ 00:29:48.856 "params": { 00:29:48.856 "name": "Nvme3", 00:29:48.856 "trtype": "tcp", 00:29:48.856 "traddr": "10.0.0.2", 00:29:48.856 "adrfam": "ipv4", 00:29:48.856 "trsvcid": "4420", 00:29:48.856 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:48.856 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:48.856 "hdgst": false, 00:29:48.856 "ddgst": false 00:29:48.856 }, 00:29:48.856 "method": "bdev_nvme_attach_controller" 00:29:48.856 },{ 00:29:48.856 "params": { 00:29:48.856 "name": "Nvme4", 00:29:48.856 "trtype": "tcp", 00:29:48.856 "traddr": "10.0.0.2", 00:29:48.856 "adrfam": "ipv4", 00:29:48.856 "trsvcid": "4420", 00:29:48.856 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:48.856 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:48.856 "hdgst": false, 00:29:48.856 "ddgst": false 00:29:48.856 }, 00:29:48.856 "method": "bdev_nvme_attach_controller" 00:29:48.856 },{ 00:29:48.856 "params": { 00:29:48.856 "name": "Nvme5", 00:29:48.856 "trtype": "tcp", 00:29:48.856 "traddr": "10.0.0.2", 00:29:48.856 "adrfam": "ipv4", 00:29:48.856 "trsvcid": "4420", 00:29:48.856 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:48.856 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:48.856 "hdgst": false, 00:29:48.856 "ddgst": false 00:29:48.856 }, 00:29:48.856 "method": "bdev_nvme_attach_controller" 00:29:48.856 },{ 00:29:48.856 "params": { 00:29:48.856 "name": "Nvme6", 00:29:48.856 "trtype": "tcp", 00:29:48.856 "traddr": "10.0.0.2", 00:29:48.856 "adrfam": "ipv4", 00:29:48.856 "trsvcid": "4420", 00:29:48.856 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:48.856 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:48.856 "hdgst": false, 00:29:48.856 "ddgst": false 00:29:48.856 }, 00:29:48.856 "method": "bdev_nvme_attach_controller" 00:29:48.856 },{ 00:29:48.856 "params": { 00:29:48.856 "name": "Nvme7", 00:29:48.856 "trtype": "tcp", 00:29:48.856 "traddr": "10.0.0.2", 00:29:48.856 "adrfam": "ipv4", 00:29:48.856 "trsvcid": "4420", 00:29:48.856 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:48.856 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:29:48.856 "hdgst": false, 00:29:48.856 "ddgst": false 00:29:48.856 }, 00:29:48.856 "method": "bdev_nvme_attach_controller" 00:29:48.856 },{ 00:29:48.856 "params": { 00:29:48.856 "name": "Nvme8", 00:29:48.856 "trtype": "tcp", 00:29:48.856 "traddr": "10.0.0.2", 00:29:48.856 "adrfam": "ipv4", 00:29:48.856 "trsvcid": "4420", 00:29:48.856 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:48.856 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:48.856 "hdgst": false, 00:29:48.856 "ddgst": false 00:29:48.856 }, 00:29:48.856 "method": "bdev_nvme_attach_controller" 00:29:48.856 },{ 00:29:48.856 "params": { 00:29:48.856 "name": "Nvme9", 00:29:48.856 "trtype": "tcp", 00:29:48.856 "traddr": "10.0.0.2", 00:29:48.856 "adrfam": "ipv4", 00:29:48.856 "trsvcid": "4420", 00:29:48.856 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:48.856 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:48.856 "hdgst": false, 00:29:48.856 "ddgst": false 00:29:48.856 }, 00:29:48.856 "method": "bdev_nvme_attach_controller" 00:29:48.856 },{ 00:29:48.856 "params": { 00:29:48.856 "name": "Nvme10", 00:29:48.856 "trtype": "tcp", 00:29:48.856 "traddr": "10.0.0.2", 00:29:48.856 "adrfam": "ipv4", 00:29:48.856 "trsvcid": "4420", 00:29:48.856 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:48.856 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:48.856 "hdgst": false, 00:29:48.856 "ddgst": false 00:29:48.856 }, 00:29:48.856 "method": "bdev_nvme_attach_controller" 00:29:48.856 }' 00:29:48.856 [2024-05-15 20:21:41.204982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:48.856 [2024-05-15 20:21:41.269764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.241 20:21:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:50.241 20:21:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:29:50.241 20:21:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:50.241 20:21:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.241 20:21:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:50.241 20:21:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.241 20:21:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 185380 00:29:50.241 20:21:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:29:50.241 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 185380 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:29:50.241 20:21:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:29:51.270 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 185139 00:29:51.270 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:51.270 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:51.270 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:29:51.270 20:21:43 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:29:51.270 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:51.270 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:51.270 { 00:29:51.270 "params": { 00:29:51.270 "name": "Nvme$subsystem", 00:29:51.270 "trtype": "$TEST_TRANSPORT", 00:29:51.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.270 "adrfam": "ipv4", 00:29:51.270 "trsvcid": "$NVMF_PORT", 00:29:51.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.270 "hdgst": ${hdgst:-false}, 00:29:51.270 "ddgst": ${ddgst:-false} 00:29:51.270 }, 00:29:51.270 "method": "bdev_nvme_attach_controller" 00:29:51.270 } 00:29:51.270 EOF 00:29:51.270 )") 00:29:51.270 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:51.270 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:51.270 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:51.270 { 00:29:51.270 "params": { 00:29:51.270 "name": "Nvme$subsystem", 00:29:51.270 "trtype": "$TEST_TRANSPORT", 00:29:51.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.270 "adrfam": "ipv4", 00:29:51.270 "trsvcid": "$NVMF_PORT", 00:29:51.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.270 "hdgst": ${hdgst:-false}, 00:29:51.270 "ddgst": ${ddgst:-false} 00:29:51.270 }, 00:29:51.270 "method": "bdev_nvme_attach_controller" 00:29:51.270 } 00:29:51.270 EOF 00:29:51.270 )") 00:29:51.270 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:51.270 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:51.270 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:51.270 { 00:29:51.270 "params": { 00:29:51.270 "name": "Nvme$subsystem", 00:29:51.270 "trtype": "$TEST_TRANSPORT", 00:29:51.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.270 "adrfam": "ipv4", 00:29:51.270 "trsvcid": "$NVMF_PORT", 00:29:51.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.270 "hdgst": ${hdgst:-false}, 00:29:51.270 "ddgst": ${ddgst:-false} 00:29:51.270 }, 00:29:51.270 "method": "bdev_nvme_attach_controller" 00:29:51.270 } 00:29:51.270 EOF 00:29:51.270 )") 00:29:51.270 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:51.270 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:51.270 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:51.270 { 00:29:51.270 "params": { 00:29:51.270 "name": "Nvme$subsystem", 00:29:51.270 "trtype": "$TEST_TRANSPORT", 00:29:51.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.270 "adrfam": "ipv4", 00:29:51.270 "trsvcid": "$NVMF_PORT", 00:29:51.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.270 "hdgst": ${hdgst:-false}, 00:29:51.270 "ddgst": ${ddgst:-false} 00:29:51.270 }, 00:29:51.270 "method": "bdev_nvme_attach_controller" 00:29:51.270 } 00:29:51.270 EOF 00:29:51.270 )") 00:29:51.270 20:21:43 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:51.270 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:51.270 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:51.270 { 00:29:51.270 "params": { 00:29:51.270 "name": "Nvme$subsystem", 00:29:51.270 "trtype": "$TEST_TRANSPORT", 00:29:51.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.270 "adrfam": "ipv4", 00:29:51.270 "trsvcid": "$NVMF_PORT", 00:29:51.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.270 "hdgst": ${hdgst:-false}, 00:29:51.270 "ddgst": ${ddgst:-false} 00:29:51.270 }, 00:29:51.270 "method": "bdev_nvme_attach_controller" 00:29:51.270 } 00:29:51.270 EOF 00:29:51.270 )") 00:29:51.270 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:51.270 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:51.270 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:51.270 { 00:29:51.270 "params": { 00:29:51.270 "name": "Nvme$subsystem", 00:29:51.270 "trtype": "$TEST_TRANSPORT", 00:29:51.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.270 "adrfam": "ipv4", 00:29:51.270 "trsvcid": "$NVMF_PORT", 00:29:51.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.270 "hdgst": ${hdgst:-false}, 00:29:51.270 "ddgst": ${ddgst:-false} 00:29:51.270 }, 00:29:51.270 "method": "bdev_nvme_attach_controller" 00:29:51.270 } 00:29:51.270 EOF 00:29:51.270 )") 00:29:51.270 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:51.270 [2024-05-15 20:21:43.651458] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
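This second expansion feeds the bdevperf run that replaces the killed bdev_svc process. The trace hands the rendered config to bdevperf through a /dev/fd process substitution; an equivalent invocation using a temporary file is sketched below, assuming the test helpers from nvmf/common.sh are sourced and the command is run from the SPDK repository root (the /tmp path is illustrative, the bdevperf flags are the ones used above).

gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 > /tmp/bdevperf_nvmf.json
./build/examples/bdevperf --json /tmp/bdevperf_nvmf.json -q 64 -o 65536 -w verify -t 1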
00:29:51.270 [2024-05-15 20:21:43.651512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid186044 ] 00:29:51.270 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:51.270 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:51.270 { 00:29:51.270 "params": { 00:29:51.270 "name": "Nvme$subsystem", 00:29:51.270 "trtype": "$TEST_TRANSPORT", 00:29:51.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.270 "adrfam": "ipv4", 00:29:51.270 "trsvcid": "$NVMF_PORT", 00:29:51.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.270 "hdgst": ${hdgst:-false}, 00:29:51.270 "ddgst": ${ddgst:-false} 00:29:51.270 }, 00:29:51.270 "method": "bdev_nvme_attach_controller" 00:29:51.270 } 00:29:51.270 EOF 00:29:51.270 )") 00:29:51.270 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:51.270 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:51.270 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:51.270 { 00:29:51.270 "params": { 00:29:51.270 "name": "Nvme$subsystem", 00:29:51.270 "trtype": "$TEST_TRANSPORT", 00:29:51.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.270 "adrfam": "ipv4", 00:29:51.270 "trsvcid": "$NVMF_PORT", 00:29:51.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.270 "hdgst": ${hdgst:-false}, 00:29:51.270 "ddgst": ${ddgst:-false} 00:29:51.270 }, 00:29:51.270 "method": "bdev_nvme_attach_controller" 00:29:51.270 } 00:29:51.270 EOF 00:29:51.270 )") 00:29:51.270 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:51.270 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:51.270 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:51.270 { 00:29:51.270 "params": { 00:29:51.270 "name": "Nvme$subsystem", 00:29:51.270 "trtype": "$TEST_TRANSPORT", 00:29:51.271 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.271 "adrfam": "ipv4", 00:29:51.271 "trsvcid": "$NVMF_PORT", 00:29:51.271 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.271 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.271 "hdgst": ${hdgst:-false}, 00:29:51.271 "ddgst": ${ddgst:-false} 00:29:51.271 }, 00:29:51.271 "method": "bdev_nvme_attach_controller" 00:29:51.271 } 00:29:51.271 EOF 00:29:51.271 )") 00:29:51.271 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:51.271 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:51.271 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:51.271 { 00:29:51.271 "params": { 00:29:51.271 "name": "Nvme$subsystem", 00:29:51.271 "trtype": "$TEST_TRANSPORT", 00:29:51.271 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.271 "adrfam": "ipv4", 00:29:51.271 "trsvcid": "$NVMF_PORT", 00:29:51.271 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.271 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.271 "hdgst": ${hdgst:-false}, 
00:29:51.271 "ddgst": ${ddgst:-false} 00:29:51.271 }, 00:29:51.271 "method": "bdev_nvme_attach_controller" 00:29:51.271 } 00:29:51.271 EOF 00:29:51.271 )") 00:29:51.271 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:51.271 EAL: No free 2048 kB hugepages reported on node 1 00:29:51.271 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:29:51.271 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:29:51.271 20:21:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:51.271 "params": { 00:29:51.271 "name": "Nvme1", 00:29:51.271 "trtype": "tcp", 00:29:51.271 "traddr": "10.0.0.2", 00:29:51.271 "adrfam": "ipv4", 00:29:51.271 "trsvcid": "4420", 00:29:51.271 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:51.271 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:51.271 "hdgst": false, 00:29:51.271 "ddgst": false 00:29:51.271 }, 00:29:51.271 "method": "bdev_nvme_attach_controller" 00:29:51.271 },{ 00:29:51.271 "params": { 00:29:51.271 "name": "Nvme2", 00:29:51.271 "trtype": "tcp", 00:29:51.271 "traddr": "10.0.0.2", 00:29:51.271 "adrfam": "ipv4", 00:29:51.271 "trsvcid": "4420", 00:29:51.271 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:51.271 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:51.271 "hdgst": false, 00:29:51.271 "ddgst": false 00:29:51.271 }, 00:29:51.271 "method": "bdev_nvme_attach_controller" 00:29:51.271 },{ 00:29:51.271 "params": { 00:29:51.271 "name": "Nvme3", 00:29:51.271 "trtype": "tcp", 00:29:51.271 "traddr": "10.0.0.2", 00:29:51.271 "adrfam": "ipv4", 00:29:51.271 "trsvcid": "4420", 00:29:51.271 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:51.271 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:51.271 "hdgst": false, 00:29:51.271 "ddgst": false 00:29:51.271 }, 00:29:51.271 "method": "bdev_nvme_attach_controller" 00:29:51.271 },{ 00:29:51.271 "params": { 00:29:51.271 "name": "Nvme4", 00:29:51.271 "trtype": "tcp", 00:29:51.271 "traddr": "10.0.0.2", 00:29:51.271 "adrfam": "ipv4", 00:29:51.271 "trsvcid": "4420", 00:29:51.271 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:51.271 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:51.271 "hdgst": false, 00:29:51.271 "ddgst": false 00:29:51.271 }, 00:29:51.271 "method": "bdev_nvme_attach_controller" 00:29:51.271 },{ 00:29:51.271 "params": { 00:29:51.271 "name": "Nvme5", 00:29:51.271 "trtype": "tcp", 00:29:51.271 "traddr": "10.0.0.2", 00:29:51.271 "adrfam": "ipv4", 00:29:51.271 "trsvcid": "4420", 00:29:51.271 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:51.271 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:51.271 "hdgst": false, 00:29:51.271 "ddgst": false 00:29:51.271 }, 00:29:51.271 "method": "bdev_nvme_attach_controller" 00:29:51.271 },{ 00:29:51.271 "params": { 00:29:51.271 "name": "Nvme6", 00:29:51.271 "trtype": "tcp", 00:29:51.271 "traddr": "10.0.0.2", 00:29:51.271 "adrfam": "ipv4", 00:29:51.271 "trsvcid": "4420", 00:29:51.271 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:51.271 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:51.271 "hdgst": false, 00:29:51.271 "ddgst": false 00:29:51.271 }, 00:29:51.271 "method": "bdev_nvme_attach_controller" 00:29:51.271 },{ 00:29:51.271 "params": { 00:29:51.271 "name": "Nvme7", 00:29:51.271 "trtype": "tcp", 00:29:51.271 "traddr": "10.0.0.2", 00:29:51.271 "adrfam": "ipv4", 00:29:51.271 "trsvcid": "4420", 00:29:51.271 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:51.271 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:51.271 "hdgst": false, 00:29:51.271 "ddgst": false 
00:29:51.271 }, 00:29:51.271 "method": "bdev_nvme_attach_controller" 00:29:51.271 },{ 00:29:51.271 "params": { 00:29:51.271 "name": "Nvme8", 00:29:51.271 "trtype": "tcp", 00:29:51.271 "traddr": "10.0.0.2", 00:29:51.271 "adrfam": "ipv4", 00:29:51.271 "trsvcid": "4420", 00:29:51.271 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:51.271 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:51.271 "hdgst": false, 00:29:51.271 "ddgst": false 00:29:51.271 }, 00:29:51.271 "method": "bdev_nvme_attach_controller" 00:29:51.271 },{ 00:29:51.271 "params": { 00:29:51.271 "name": "Nvme9", 00:29:51.271 "trtype": "tcp", 00:29:51.271 "traddr": "10.0.0.2", 00:29:51.271 "adrfam": "ipv4", 00:29:51.271 "trsvcid": "4420", 00:29:51.271 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:51.271 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:51.271 "hdgst": false, 00:29:51.271 "ddgst": false 00:29:51.271 }, 00:29:51.271 "method": "bdev_nvme_attach_controller" 00:29:51.271 },{ 00:29:51.271 "params": { 00:29:51.271 "name": "Nvme10", 00:29:51.271 "trtype": "tcp", 00:29:51.271 "traddr": "10.0.0.2", 00:29:51.271 "adrfam": "ipv4", 00:29:51.271 "trsvcid": "4420", 00:29:51.271 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:51.271 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:51.271 "hdgst": false, 00:29:51.271 "ddgst": false 00:29:51.271 }, 00:29:51.271 "method": "bdev_nvme_attach_controller" 00:29:51.271 }' 00:29:51.271 [2024-05-15 20:21:43.737385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.531 [2024-05-15 20:21:43.805818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.993 Running I/O for 1 seconds... 00:29:53.934 00:29:53.934 Latency(us) 00:29:53.934 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:53.934 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:53.934 Verification LBA range: start 0x0 length 0x400 00:29:53.934 Nvme1n1 : 1.10 235.71 14.73 0.00 0.00 268047.28 3604.48 219327.15 00:29:53.934 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:53.934 Verification LBA range: start 0x0 length 0x400 00:29:53.934 Nvme2n1 : 1.17 218.35 13.65 0.00 0.00 285505.28 20534.61 267386.88 00:29:53.934 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:53.934 Verification LBA range: start 0x0 length 0x400 00:29:53.934 Nvme3n1 : 1.09 234.25 14.64 0.00 0.00 256849.71 19114.67 244667.73 00:29:53.934 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:53.934 Verification LBA range: start 0x0 length 0x400 00:29:53.934 Nvme4n1 : 1.17 273.50 17.09 0.00 0.00 218754.22 10758.83 241172.48 00:29:53.934 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:53.934 Verification LBA range: start 0x0 length 0x400 00:29:53.934 Nvme5n1 : 1.14 223.96 14.00 0.00 0.00 263866.45 17367.04 249910.61 00:29:53.934 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:53.934 Verification LBA range: start 0x0 length 0x400 00:29:53.934 Nvme6n1 : 1.18 270.20 16.89 0.00 0.00 215586.82 23592.96 253405.87 00:29:53.934 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:53.934 Verification LBA range: start 0x0 length 0x400 00:29:53.934 Nvme7n1 : 1.19 268.61 16.79 0.00 0.00 213164.71 17257.81 244667.73 00:29:53.934 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:53.934 Verification LBA range: start 0x0 length 0x400 00:29:53.934 Nvme8n1 : 1.19 223.08 13.94 0.00 0.00 241147.49 
5980.16 246415.36 00:29:53.934 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:53.934 Verification LBA range: start 0x0 length 0x400 00:29:53.934 Nvme9n1 : 1.20 266.93 16.68 0.00 0.00 206817.19 7700.48 279620.27 00:29:53.934 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:53.934 Verification LBA range: start 0x0 length 0x400 00:29:53.934 Nvme10n1 : 1.18 217.23 13.58 0.00 0.00 249059.41 21954.56 272629.76 00:29:53.934 =================================================================================================================== 00:29:53.934 Total : 2431.81 151.99 0.00 0.00 239353.70 3604.48 279620.27 00:29:54.195 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:29:54.195 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:29:54.195 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:54.195 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:54.195 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:29:54.195 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:54.195 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:29:54.195 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:54.195 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:29:54.195 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:54.195 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:54.195 rmmod nvme_tcp 00:29:54.195 rmmod nvme_fabrics 00:29:54.195 rmmod nvme_keyring 00:29:54.195 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:54.195 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:29:54.195 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:29:54.196 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 185139 ']' 00:29:54.196 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 185139 00:29:54.196 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 185139 ']' 00:29:54.196 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 185139 00:29:54.196 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:29:54.196 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:54.196 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 185139 00:29:54.456 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:29:54.456 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:29:54.456 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 185139' 00:29:54.456 killing process with pid 185139 00:29:54.456 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 185139 00:29:54.456 [2024-05-15 20:21:46.712189] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:54.456 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 185139 00:29:54.717 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:54.717 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:54.717 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:54.717 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:54.717 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:54.717 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:54.717 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:54.717 20:21:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.629 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:56.629 00:29:56.629 real 0m17.712s 00:29:56.629 user 0m35.074s 00:29:56.629 sys 0m7.363s 00:29:56.629 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:56.629 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:56.629 ************************************ 00:29:56.629 END TEST nvmf_shutdown_tc1 00:29:56.629 ************************************ 00:29:56.629 20:21:49 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:29:56.629 20:21:49 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:56.629 20:21:49 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:56.629 20:21:49 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:56.629 ************************************ 00:29:56.890 START TEST nvmf_shutdown_tc2 00:29:56.890 ************************************ 00:29:56.890 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:29:56.890 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:29:56.890 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:29:56.890 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:56.890 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:56.890 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:56.890 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:56.890 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:56.890 20:21:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.890 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:56.890 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.890 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:56.890 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:56.890 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:56.890 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:56.890 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:56.890 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:56.890 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:56.890 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- 
# mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:56.891 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:56.891 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:56.891 20:21:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:56.891 Found net devices under 0000:31:00.0: cvl_0_0 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:56.891 Found net devices under 0000:31:00.1: cvl_0_1 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:56.891 20:21:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:56.891 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:57.152 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:57.152 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:57.152 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:57.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:57.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.585 ms 00:29:57.152 00:29:57.152 --- 10.0.0.2 ping statistics --- 00:29:57.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.152 rtt min/avg/max/mdev = 0.585/0.585/0.585/0.000 ms 00:29:57.152 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:57.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:57.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.371 ms 00:29:57.152 00:29:57.152 --- 10.0.0.1 ping statistics --- 00:29:57.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.152 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:29:57.152 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:57.152 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:29:57.152 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:57.152 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:57.152 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:57.152 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:57.152 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:57.152 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:57.152 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:57.152 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:29:57.152 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:57.152 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:57.152 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:57.152 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=187164 00:29:57.152 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 187164 00:29:57.152 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:57.152 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 187164 ']' 00:29:57.152 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.153 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:57.153 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:57.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:57.153 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:57.153 20:21:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:57.153 [2024-05-15 20:21:49.604015] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:29:57.153 [2024-05-15 20:21:49.604077] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:57.153 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.413 [2024-05-15 20:21:49.681627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:57.413 [2024-05-15 20:21:49.756023] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:57.413 [2024-05-15 20:21:49.756058] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:57.413 [2024-05-15 20:21:49.756066] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:57.413 [2024-05-15 20:21:49.756072] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:57.413 [2024-05-15 20:21:49.756078] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:57.413 [2024-05-15 20:21:49.756182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:57.413 [2024-05-15 20:21:49.756358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:57.413 [2024-05-15 20:21:49.756519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:57.413 [2024-05-15 20:21:49.756519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:57.985 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:57.985 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:29:57.985 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:57.985 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:57.985 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:58.246 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:58.246 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:58.246 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.246 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:58.246 [2024-05-15 20:21:50.529208] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:58.246 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.246 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:29:58.246 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:29:58.246 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:58.246 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:58.247 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:58.247 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:58.247 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:58.247 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:58.247 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:58.247 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:58.247 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:58.247 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:58.247 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:58.247 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:58.247 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:58.247 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:58.247 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:58.247 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:58.247 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:58.247 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:58.247 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:58.247 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:58.247 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:58.247 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:58.247 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:58.247 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:29:58.247 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.247 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:58.247 Malloc1 00:29:58.247 [2024-05-15 20:21:50.632391] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:58.247 [2024-05-15 20:21:50.632622] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:58.247 Malloc2 00:29:58.247 Malloc3 00:29:58.247 Malloc4 00:29:58.507 Malloc5 00:29:58.507 Malloc6 00:29:58.507 Malloc7 00:29:58.507 Malloc8 00:29:58.507 Malloc9 00:29:58.507 Malloc10 00:29:58.507 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.507 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:29:58.507 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:58.507 20:21:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:58.768 20:21:51 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=187553 00:29:58.768 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 187553 /var/tmp/bdevperf.sock 00:29:58.768 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 187553 ']' 00:29:58.768 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:58.768 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:58.768 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:58.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:58.768 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:58.768 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:58.768 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:58.768 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:58.768 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:29:58.768 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:29:58.768 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:58.768 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:58.768 { 00:29:58.768 "params": { 00:29:58.768 "name": "Nvme$subsystem", 00:29:58.768 "trtype": "$TEST_TRANSPORT", 00:29:58.768 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:58.768 "adrfam": "ipv4", 00:29:58.768 "trsvcid": "$NVMF_PORT", 00:29:58.768 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:58.768 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:58.768 "hdgst": ${hdgst:-false}, 00:29:58.768 "ddgst": ${ddgst:-false} 00:29:58.768 }, 00:29:58.768 "method": "bdev_nvme_attach_controller" 00:29:58.768 } 00:29:58.768 EOF 00:29:58.768 )") 00:29:58.768 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:58.768 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:58.768 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:58.768 { 00:29:58.768 "params": { 00:29:58.768 "name": "Nvme$subsystem", 00:29:58.768 "trtype": "$TEST_TRANSPORT", 00:29:58.768 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:58.768 "adrfam": "ipv4", 00:29:58.768 "trsvcid": "$NVMF_PORT", 00:29:58.768 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:58.768 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:58.768 "hdgst": ${hdgst:-false}, 00:29:58.768 "ddgst": ${ddgst:-false} 00:29:58.768 }, 00:29:58.768 "method": "bdev_nvme_attach_controller" 00:29:58.768 } 00:29:58.768 EOF 00:29:58.768 )") 00:29:58.768 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:58.768 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:58.768 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:58.768 { 00:29:58.768 "params": { 00:29:58.768 "name": "Nvme$subsystem", 00:29:58.768 "trtype": "$TEST_TRANSPORT", 00:29:58.768 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:58.768 "adrfam": "ipv4", 00:29:58.768 "trsvcid": "$NVMF_PORT", 00:29:58.768 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:58.768 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:58.768 "hdgst": ${hdgst:-false}, 00:29:58.768 "ddgst": ${ddgst:-false} 00:29:58.768 }, 00:29:58.768 "method": "bdev_nvme_attach_controller" 00:29:58.768 } 00:29:58.768 EOF 00:29:58.768 )") 00:29:58.769 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:58.769 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:58.769 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:58.769 { 00:29:58.769 "params": { 00:29:58.769 "name": "Nvme$subsystem", 00:29:58.769 "trtype": "$TEST_TRANSPORT", 00:29:58.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:58.769 "adrfam": "ipv4", 00:29:58.769 "trsvcid": "$NVMF_PORT", 00:29:58.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:58.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:58.769 "hdgst": ${hdgst:-false}, 00:29:58.769 "ddgst": ${ddgst:-false} 00:29:58.769 }, 00:29:58.769 "method": "bdev_nvme_attach_controller" 00:29:58.769 } 00:29:58.769 EOF 00:29:58.769 )") 00:29:58.769 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:58.769 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:58.769 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:58.769 { 00:29:58.769 "params": { 00:29:58.769 "name": "Nvme$subsystem", 00:29:58.769 "trtype": "$TEST_TRANSPORT", 00:29:58.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:58.769 "adrfam": "ipv4", 00:29:58.769 "trsvcid": "$NVMF_PORT", 00:29:58.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:58.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:58.769 "hdgst": ${hdgst:-false}, 00:29:58.769 "ddgst": ${ddgst:-false} 00:29:58.769 }, 00:29:58.769 "method": "bdev_nvme_attach_controller" 00:29:58.769 } 00:29:58.769 EOF 00:29:58.769 )") 00:29:58.769 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:58.769 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:58.769 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:58.769 { 00:29:58.769 "params": { 00:29:58.769 "name": "Nvme$subsystem", 00:29:58.769 "trtype": "$TEST_TRANSPORT", 00:29:58.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:58.769 "adrfam": "ipv4", 00:29:58.769 "trsvcid": "$NVMF_PORT", 00:29:58.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:58.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:58.769 "hdgst": ${hdgst:-false}, 00:29:58.769 "ddgst": ${ddgst:-false} 00:29:58.769 }, 00:29:58.769 "method": "bdev_nvme_attach_controller" 00:29:58.769 } 00:29:58.769 EOF 00:29:58.769 )") 00:29:58.769 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:58.769 [2024-05-15 20:21:51.079959] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 
23.11.0 initialization... 00:29:58.769 [2024-05-15 20:21:51.080010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid187553 ] 00:29:58.769 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:58.769 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:58.769 { 00:29:58.769 "params": { 00:29:58.769 "name": "Nvme$subsystem", 00:29:58.769 "trtype": "$TEST_TRANSPORT", 00:29:58.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:58.769 "adrfam": "ipv4", 00:29:58.769 "trsvcid": "$NVMF_PORT", 00:29:58.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:58.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:58.769 "hdgst": ${hdgst:-false}, 00:29:58.769 "ddgst": ${ddgst:-false} 00:29:58.769 }, 00:29:58.769 "method": "bdev_nvme_attach_controller" 00:29:58.769 } 00:29:58.769 EOF 00:29:58.769 )") 00:29:58.769 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:58.769 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:58.769 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:58.769 { 00:29:58.769 "params": { 00:29:58.769 "name": "Nvme$subsystem", 00:29:58.769 "trtype": "$TEST_TRANSPORT", 00:29:58.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:58.769 "adrfam": "ipv4", 00:29:58.769 "trsvcid": "$NVMF_PORT", 00:29:58.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:58.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:58.769 "hdgst": ${hdgst:-false}, 00:29:58.769 "ddgst": ${ddgst:-false} 00:29:58.769 }, 00:29:58.769 "method": "bdev_nvme_attach_controller" 00:29:58.769 } 00:29:58.769 EOF 00:29:58.769 )") 00:29:58.769 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:58.769 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:58.769 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:58.769 { 00:29:58.769 "params": { 00:29:58.769 "name": "Nvme$subsystem", 00:29:58.769 "trtype": "$TEST_TRANSPORT", 00:29:58.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:58.769 "adrfam": "ipv4", 00:29:58.769 "trsvcid": "$NVMF_PORT", 00:29:58.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:58.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:58.769 "hdgst": ${hdgst:-false}, 00:29:58.769 "ddgst": ${ddgst:-false} 00:29:58.769 }, 00:29:58.769 "method": "bdev_nvme_attach_controller" 00:29:58.769 } 00:29:58.769 EOF 00:29:58.769 )") 00:29:58.769 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:58.769 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:58.769 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:58.769 { 00:29:58.769 "params": { 00:29:58.769 "name": "Nvme$subsystem", 00:29:58.769 "trtype": "$TEST_TRANSPORT", 00:29:58.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:58.769 "adrfam": "ipv4", 00:29:58.769 "trsvcid": "$NVMF_PORT", 00:29:58.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:58.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:58.769 
"hdgst": ${hdgst:-false}, 00:29:58.769 "ddgst": ${ddgst:-false} 00:29:58.769 }, 00:29:58.769 "method": "bdev_nvme_attach_controller" 00:29:58.769 } 00:29:58.769 EOF 00:29:58.769 )") 00:29:58.769 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:58.769 EAL: No free 2048 kB hugepages reported on node 1 00:29:58.769 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:29:58.769 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:29:58.769 20:21:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:58.769 "params": { 00:29:58.769 "name": "Nvme1", 00:29:58.769 "trtype": "tcp", 00:29:58.769 "traddr": "10.0.0.2", 00:29:58.769 "adrfam": "ipv4", 00:29:58.769 "trsvcid": "4420", 00:29:58.769 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:58.769 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:58.769 "hdgst": false, 00:29:58.769 "ddgst": false 00:29:58.769 }, 00:29:58.769 "method": "bdev_nvme_attach_controller" 00:29:58.769 },{ 00:29:58.769 "params": { 00:29:58.769 "name": "Nvme2", 00:29:58.769 "trtype": "tcp", 00:29:58.769 "traddr": "10.0.0.2", 00:29:58.769 "adrfam": "ipv4", 00:29:58.769 "trsvcid": "4420", 00:29:58.769 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:58.769 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:58.769 "hdgst": false, 00:29:58.769 "ddgst": false 00:29:58.769 }, 00:29:58.769 "method": "bdev_nvme_attach_controller" 00:29:58.769 },{ 00:29:58.769 "params": { 00:29:58.769 "name": "Nvme3", 00:29:58.769 "trtype": "tcp", 00:29:58.769 "traddr": "10.0.0.2", 00:29:58.769 "adrfam": "ipv4", 00:29:58.769 "trsvcid": "4420", 00:29:58.769 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:58.769 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:58.769 "hdgst": false, 00:29:58.769 "ddgst": false 00:29:58.769 }, 00:29:58.769 "method": "bdev_nvme_attach_controller" 00:29:58.769 },{ 00:29:58.769 "params": { 00:29:58.769 "name": "Nvme4", 00:29:58.769 "trtype": "tcp", 00:29:58.769 "traddr": "10.0.0.2", 00:29:58.769 "adrfam": "ipv4", 00:29:58.769 "trsvcid": "4420", 00:29:58.769 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:58.769 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:58.769 "hdgst": false, 00:29:58.769 "ddgst": false 00:29:58.769 }, 00:29:58.769 "method": "bdev_nvme_attach_controller" 00:29:58.769 },{ 00:29:58.769 "params": { 00:29:58.769 "name": "Nvme5", 00:29:58.769 "trtype": "tcp", 00:29:58.769 "traddr": "10.0.0.2", 00:29:58.769 "adrfam": "ipv4", 00:29:58.769 "trsvcid": "4420", 00:29:58.769 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:58.769 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:58.769 "hdgst": false, 00:29:58.769 "ddgst": false 00:29:58.769 }, 00:29:58.769 "method": "bdev_nvme_attach_controller" 00:29:58.769 },{ 00:29:58.769 "params": { 00:29:58.769 "name": "Nvme6", 00:29:58.769 "trtype": "tcp", 00:29:58.769 "traddr": "10.0.0.2", 00:29:58.769 "adrfam": "ipv4", 00:29:58.769 "trsvcid": "4420", 00:29:58.769 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:58.769 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:58.769 "hdgst": false, 00:29:58.769 "ddgst": false 00:29:58.769 }, 00:29:58.769 "method": "bdev_nvme_attach_controller" 00:29:58.769 },{ 00:29:58.769 "params": { 00:29:58.769 "name": "Nvme7", 00:29:58.769 "trtype": "tcp", 00:29:58.769 "traddr": "10.0.0.2", 00:29:58.769 "adrfam": "ipv4", 00:29:58.769 "trsvcid": "4420", 00:29:58.769 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:58.769 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:58.769 "hdgst": false, 
00:29:58.769 "ddgst": false 00:29:58.769 }, 00:29:58.769 "method": "bdev_nvme_attach_controller" 00:29:58.770 },{ 00:29:58.770 "params": { 00:29:58.770 "name": "Nvme8", 00:29:58.770 "trtype": "tcp", 00:29:58.770 "traddr": "10.0.0.2", 00:29:58.770 "adrfam": "ipv4", 00:29:58.770 "trsvcid": "4420", 00:29:58.770 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:58.770 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:58.770 "hdgst": false, 00:29:58.770 "ddgst": false 00:29:58.770 }, 00:29:58.770 "method": "bdev_nvme_attach_controller" 00:29:58.770 },{ 00:29:58.770 "params": { 00:29:58.770 "name": "Nvme9", 00:29:58.770 "trtype": "tcp", 00:29:58.770 "traddr": "10.0.0.2", 00:29:58.770 "adrfam": "ipv4", 00:29:58.770 "trsvcid": "4420", 00:29:58.770 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:58.770 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:58.770 "hdgst": false, 00:29:58.770 "ddgst": false 00:29:58.770 }, 00:29:58.770 "method": "bdev_nvme_attach_controller" 00:29:58.770 },{ 00:29:58.770 "params": { 00:29:58.770 "name": "Nvme10", 00:29:58.770 "trtype": "tcp", 00:29:58.770 "traddr": "10.0.0.2", 00:29:58.770 "adrfam": "ipv4", 00:29:58.770 "trsvcid": "4420", 00:29:58.770 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:58.770 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:58.770 "hdgst": false, 00:29:58.770 "ddgst": false 00:29:58.770 }, 00:29:58.770 "method": "bdev_nvme_attach_controller" 00:29:58.770 }' 00:29:58.770 [2024-05-15 20:21:51.164610] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.770 [2024-05-15 20:21:51.229320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.682 Running I/O for 10 seconds... 00:30:00.682 20:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:00.682 20:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:30:00.682 20:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:00.682 20:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:00.682 20:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:00.682 20:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:00.682 20:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:30:00.682 20:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:00.682 20:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:30:00.682 20:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:30:00.682 20:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:30:00.682 20:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:30:00.682 20:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:00.682 20:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:00.682 20:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:00.682 20:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:30:00.682 20:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:00.682 20:21:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:00.682 20:21:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:30:00.682 20:21:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:30:00.682 20:21:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:30:00.943 20:21:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:30:00.943 20:21:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:00.943 20:21:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:00.943 20:21:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:00.943 20:21:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:00.943 20:21:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:00.943 20:21:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:00.943 20:21:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:30:00.943 20:21:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:30:00.943 20:21:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:30:01.204 20:21:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:30:01.204 20:21:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:01.204 20:21:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:01.204 20:21:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:01.204 20:21:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.204 20:21:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:01.204 20:21:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.204 20:21:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:30:01.204 20:21:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:30:01.204 20:21:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:30:01.204 20:21:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:30:01.204 20:21:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:30:01.204 20:21:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 187553 00:30:01.204 20:21:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 187553 ']' 00:30:01.204 20:21:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 187553 00:30:01.204 20:21:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:30:01.204 20:21:53 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:01.204 20:21:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 187553 00:30:01.204 20:21:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:01.204 20:21:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:01.204 20:21:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 187553' 00:30:01.204 killing process with pid 187553 00:30:01.204 20:21:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 187553 00:30:01.204 20:21:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 187553 00:30:01.464 Received shutdown signal, test time was about 0.960414 seconds 00:30:01.464 00:30:01.464 Latency(us) 00:30:01.464 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:01.464 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:01.464 Verification LBA range: start 0x0 length 0x400 00:30:01.464 Nvme1n1 : 0.96 264.71 16.54 0.00 0.00 238523.95 17913.17 255153.49 00:30:01.464 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:01.464 Verification LBA range: start 0x0 length 0x400 00:30:01.464 Nvme2n1 : 0.92 208.88 13.05 0.00 0.00 296116.91 20316.16 251658.24 00:30:01.464 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:01.464 Verification LBA range: start 0x0 length 0x400 00:30:01.464 Nvme3n1 : 0.95 269.97 16.87 0.00 0.00 224003.20 15619.41 255153.49 00:30:01.464 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:01.464 Verification LBA range: start 0x0 length 0x400 00:30:01.464 Nvme4n1 : 0.96 268.02 16.75 0.00 0.00 221250.99 19005.44 256901.12 00:30:01.464 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:01.464 Verification LBA range: start 0x0 length 0x400 00:30:01.464 Nvme5n1 : 0.93 205.67 12.85 0.00 0.00 279453.30 20097.71 251658.24 00:30:01.465 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:01.465 Verification LBA range: start 0x0 length 0x400 00:30:01.465 Nvme6n1 : 0.93 206.21 12.89 0.00 0.00 274212.98 23265.28 255153.49 00:30:01.465 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:01.465 Verification LBA range: start 0x0 length 0x400 00:30:01.465 Nvme7n1 : 0.94 278.69 17.42 0.00 0.00 197229.70 6853.97 255153.49 00:30:01.465 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:01.465 Verification LBA range: start 0x0 length 0x400 00:30:01.465 Nvme8n1 : 0.94 204.74 12.80 0.00 0.00 263526.68 27197.44 237677.23 00:30:01.465 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:01.465 Verification LBA range: start 0x0 length 0x400 00:30:01.465 Nvme9n1 : 0.94 203.50 12.72 0.00 0.00 259087.93 22828.37 256901.12 00:30:01.465 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:01.465 Verification LBA range: start 0x0 length 0x400 00:30:01.465 Nvme10n1 : 0.95 202.27 12.64 0.00 0.00 254337.71 25995.95 284863.15 00:30:01.465 =================================================================================================================== 00:30:01.465 Total : 2312.67 144.54 0.00 0.00 247053.92 
6853.97 284863.15 00:30:01.465 20:21:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:30:02.407 20:21:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 187164 00:30:02.407 20:21:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:30:02.407 20:21:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:30:02.407 20:21:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:02.407 20:21:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:02.407 20:21:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:30:02.407 20:21:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:02.407 20:21:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:30:02.407 20:21:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:02.407 20:21:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:30:02.407 20:21:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:02.407 20:21:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:02.407 rmmod nvme_tcp 00:30:02.668 rmmod nvme_fabrics 00:30:02.668 rmmod nvme_keyring 00:30:02.668 20:21:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:02.668 20:21:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:30:02.668 20:21:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:30:02.668 20:21:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 187164 ']' 00:30:02.668 20:21:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 187164 00:30:02.668 20:21:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 187164 ']' 00:30:02.668 20:21:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 187164 00:30:02.668 20:21:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:30:02.668 20:21:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:02.668 20:21:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 187164 00:30:02.668 20:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:30:02.668 20:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:30:02.668 20:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 187164' 00:30:02.668 killing process with pid 187164 00:30:02.668 20:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 187164 00:30:02.668 [2024-05-15 20:21:55.011179] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 
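The tc2 shutdown gate traced above reduces to a short polling loop against bdevperf's RPC socket followed by guarded kills of bdevperf and the nvmf target. The sketch below is reconstructed from the xtrace output only; the helper names (waitforio, killprocess, stoptarget) follow the trace, but their exact bodies in target/shutdown.sh and autotest_common.sh may differ:

# Illustrative reconstruction from the trace, not the canonical test script.
# Poll Nvme1n1 read I/O over the bdevperf RPC socket until it crosses a
# threshold (the trace shows counts of 3, 67, then 131 against a limit of 100).
waitforio() {
    local rpc_sock=$1 bdev=$2 i read_io_count ret=1
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
                        jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

# In this run perfpid=187553 (bdevperf) and nvmfpid=187164 (nvmf_tgt).
waitforio /var/tmp/bdevperf.sock Nvme1n1
killprocess "$perfpid"     # verifies the pid exists, then kill + wait
kill -0 "$nvmfpid"         # the target must still be alive after bdevperf exits
stoptarget                 # removes bdevperf.conf / rpcs.txt and runs nvmftestfini

This mirrors the sequence visible in the log: the read count is sampled every 0.25 s for at most ten iterations, and only once it reaches 100 does the test tear down bdevperf and then the target.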
00:30:02.668 20:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 187164 00:30:02.930 20:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:02.930 20:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:02.930 20:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:02.930 20:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:02.930 20:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:02.930 20:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:02.930 20:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:02.930 20:21:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:04.845 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:04.845 00:30:04.845 real 0m8.200s 00:30:04.845 user 0m25.087s 00:30:04.845 sys 0m1.321s 00:30:04.845 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:04.845 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:04.845 ************************************ 00:30:04.845 END TEST nvmf_shutdown_tc2 00:30:04.845 ************************************ 00:30:05.106 20:21:57 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:30:05.106 20:21:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:05.106 20:21:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:05.106 20:21:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:05.106 ************************************ 00:30:05.106 START TEST nvmf_shutdown_tc3 00:30:05.106 ************************************ 00:30:05.106 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:30:05.106 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:30:05.106 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:30:05.106 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:05.106 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:05.106 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:05.106 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:05.106 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:05.106 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:05.106 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:05.106 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:05.106 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != 
virt ]] 00:30:05.106 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:05.106 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:05.106 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:05.106 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:05.106 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:05.106 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:05.106 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:05.107 20:21:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:05.107 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:05.107 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:30:05.107 Found net devices under 0000:31:00.0: cvl_0_0 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:05.107 Found net devices under 0000:31:00.1: cvl_0_1 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:05.107 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:05.369 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:30:05.369 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:05.369 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:05.369 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:05.369 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:05.369 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:05.369 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:05.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:05.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.592 ms 00:30:05.369 00:30:05.369 --- 10.0.0.2 ping statistics --- 00:30:05.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.369 rtt min/avg/max/mdev = 0.592/0.592/0.592/0.000 ms 00:30:05.369 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:05.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:05.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:30:05.369 00:30:05.369 --- 10.0.0.1 ping statistics --- 00:30:05.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.369 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:30:05.369 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:05.369 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:30:05.369 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:05.369 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:05.369 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:05.369 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:05.369 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:05.369 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:05.369 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:05.369 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:30:05.369 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:05.369 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:05.369 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:05.369 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=189010 00:30:05.369 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 189010 00:30:05.369 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:05.369 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 189010 ']' 00:30:05.369 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.369 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:05.369 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:05.369 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:05.369 20:21:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:05.630 [2024-05-15 20:21:57.883219] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:30:05.630 [2024-05-15 20:21:57.883268] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:05.630 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.630 [2024-05-15 20:21:57.956788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:05.630 [2024-05-15 20:21:58.021599] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:05.630 [2024-05-15 20:21:58.021636] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:05.630 [2024-05-15 20:21:58.021643] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:05.630 [2024-05-15 20:21:58.021649] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:05.630 [2024-05-15 20:21:58.021655] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
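The nvmf_tcp_init sequence traced above lets one host act as both NVMe/TCP initiator and target: the target port cvl_0_0 is moved into its own network namespace with 10.0.0.2, the initiator port cvl_0_1 keeps 10.0.0.1 in the root namespace, and nvmf_tgt is then launched inside that namespace on cores 1-4 (-m 0x1E). Condensed from the commands above (full binary path shortened):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # make sure NVMe/TCP port 4420 is not filtered
  ping -c 1 10.0.0.2                                             # root namespace -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> initiator reachability
  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E                # reactors start on cores 1,2,3,4 (see below)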
00:30:05.630 [2024-05-15 20:21:58.021762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:05.630 [2024-05-15 20:21:58.021918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:05.630 [2024-05-15 20:21:58.022073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:05.630 [2024-05-15 20:21:58.022074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:30:06.570 20:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:06.570 20:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:30:06.570 20:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:06.570 20:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:06.570 20:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:06.570 20:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:06.570 20:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:06.570 20:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:06.570 20:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:06.570 [2024-05-15 20:21:58.792183] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:06.570 20:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:06.570 20:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:30:06.570 20:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:30:06.570 20:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:06.570 20:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:06.570 20:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:06.570 20:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:06.570 20:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:06.570 20:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:06.570 20:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:06.570 20:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:06.570 20:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:06.570 20:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:06.570 20:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:06.570 20:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:06.570 20:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:06.570 20:21:58 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:06.570 20:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:06.570 20:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:06.570 20:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:06.570 20:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:06.570 20:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:06.570 20:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:06.570 20:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:06.570 20:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:06.570 20:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:06.570 20:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:30:06.570 20:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:06.570 20:21:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:06.570 Malloc1 00:30:06.570 [2024-05-15 20:21:58.895473] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:06.570 [2024-05-15 20:21:58.895722] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:06.570 Malloc2 00:30:06.570 Malloc3 00:30:06.570 Malloc4 00:30:06.570 Malloc5 00:30:06.570 Malloc6 00:30:06.830 Malloc7 00:30:06.830 Malloc8 00:30:06.830 Malloc9 00:30:06.830 Malloc10 00:30:06.830 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:06.830 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:30:06.830 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:06.830 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:06.830 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=189268 00:30:06.830 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 189268 /var/tmp/bdevperf.sock 00:30:06.830 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 189268 ']' 00:30:06.830 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:06.830 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:06.830 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:06.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
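In the bdevperf launch traced just below, --json /dev/fd/63 is bash process substitution: gen_nvmf_target_json renders one bdev_nvme_attach_controller stanza per subsystem (the resulting JSON is echoed further down), and bdevperf consumes it as its startup configuration. Written out, the invocation is roughly equivalent to the following (flag glosses describe the standard bdevperf options and are not part of the trace):

  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
      -q 64 -o 65536 -w verify -t 10
  # -r   RPC socket that the waitforio loop later polls with bdev_get_iostat
  # -q   queue depth of 64 per bdev
  # -o   64 KiB I/O size
  # -w   'verify' workload (write, then read back and compare)
  # -t   run for 10 seconds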
00:30:06.830 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:06.830 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:06.830 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:06.830 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:06.830 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:30:06.830 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:30:06.830 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:06.830 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:06.830 { 00:30:06.830 "params": { 00:30:06.830 "name": "Nvme$subsystem", 00:30:06.830 "trtype": "$TEST_TRANSPORT", 00:30:06.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.830 "adrfam": "ipv4", 00:30:06.830 "trsvcid": "$NVMF_PORT", 00:30:06.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.830 "hdgst": ${hdgst:-false}, 00:30:06.830 "ddgst": ${ddgst:-false} 00:30:06.830 }, 00:30:06.830 "method": "bdev_nvme_attach_controller" 00:30:06.830 } 00:30:06.830 EOF 00:30:06.830 )") 00:30:06.830 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:06.830 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:06.830 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:06.830 { 00:30:06.830 "params": { 00:30:06.830 "name": "Nvme$subsystem", 00:30:06.830 "trtype": "$TEST_TRANSPORT", 00:30:06.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.830 "adrfam": "ipv4", 00:30:06.830 "trsvcid": "$NVMF_PORT", 00:30:06.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.830 "hdgst": ${hdgst:-false}, 00:30:06.830 "ddgst": ${ddgst:-false} 00:30:06.830 }, 00:30:06.830 "method": "bdev_nvme_attach_controller" 00:30:06.830 } 00:30:06.830 EOF 00:30:06.830 )") 00:30:06.830 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:06.830 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:06.830 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:06.830 { 00:30:06.830 "params": { 00:30:06.830 "name": "Nvme$subsystem", 00:30:06.830 "trtype": "$TEST_TRANSPORT", 00:30:06.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.830 "adrfam": "ipv4", 00:30:06.830 "trsvcid": "$NVMF_PORT", 00:30:06.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.830 "hdgst": ${hdgst:-false}, 00:30:06.830 "ddgst": ${ddgst:-false} 00:30:06.830 }, 00:30:06.830 "method": "bdev_nvme_attach_controller" 00:30:06.830 } 00:30:06.830 EOF 00:30:06.830 )") 00:30:06.830 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:06.830 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:30:06.830 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:06.830 { 00:30:06.830 "params": { 00:30:06.830 "name": "Nvme$subsystem", 00:30:06.830 "trtype": "$TEST_TRANSPORT", 00:30:06.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.830 "adrfam": "ipv4", 00:30:06.830 "trsvcid": "$NVMF_PORT", 00:30:06.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.830 "hdgst": ${hdgst:-false}, 00:30:06.830 "ddgst": ${ddgst:-false} 00:30:06.830 }, 00:30:06.830 "method": "bdev_nvme_attach_controller" 00:30:06.830 } 00:30:06.830 EOF 00:30:06.830 )") 00:30:06.830 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:07.091 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:07.091 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:07.091 { 00:30:07.091 "params": { 00:30:07.091 "name": "Nvme$subsystem", 00:30:07.091 "trtype": "$TEST_TRANSPORT", 00:30:07.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:07.091 "adrfam": "ipv4", 00:30:07.091 "trsvcid": "$NVMF_PORT", 00:30:07.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:07.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:07.091 "hdgst": ${hdgst:-false}, 00:30:07.091 "ddgst": ${ddgst:-false} 00:30:07.091 }, 00:30:07.091 "method": "bdev_nvme_attach_controller" 00:30:07.091 } 00:30:07.091 EOF 00:30:07.091 )") 00:30:07.091 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:07.091 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:07.091 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:07.091 { 00:30:07.091 "params": { 00:30:07.091 "name": "Nvme$subsystem", 00:30:07.091 "trtype": "$TEST_TRANSPORT", 00:30:07.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:07.091 "adrfam": "ipv4", 00:30:07.091 "trsvcid": "$NVMF_PORT", 00:30:07.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:07.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:07.091 "hdgst": ${hdgst:-false}, 00:30:07.091 "ddgst": ${ddgst:-false} 00:30:07.091 }, 00:30:07.091 "method": "bdev_nvme_attach_controller" 00:30:07.091 } 00:30:07.091 EOF 00:30:07.091 )") 00:30:07.091 [2024-05-15 20:21:59.342561] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:30:07.091 [2024-05-15 20:21:59.342619] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid189268 ] 00:30:07.091 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:07.091 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:07.091 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:07.091 { 00:30:07.091 "params": { 00:30:07.091 "name": "Nvme$subsystem", 00:30:07.091 "trtype": "$TEST_TRANSPORT", 00:30:07.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:07.091 "adrfam": "ipv4", 00:30:07.091 "trsvcid": "$NVMF_PORT", 00:30:07.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:07.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:07.091 "hdgst": ${hdgst:-false}, 00:30:07.091 "ddgst": ${ddgst:-false} 00:30:07.091 }, 00:30:07.091 "method": "bdev_nvme_attach_controller" 00:30:07.091 } 00:30:07.091 EOF 00:30:07.091 )") 00:30:07.091 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:07.091 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:07.091 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:07.091 { 00:30:07.091 "params": { 00:30:07.091 "name": "Nvme$subsystem", 00:30:07.091 "trtype": "$TEST_TRANSPORT", 00:30:07.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:07.091 "adrfam": "ipv4", 00:30:07.091 "trsvcid": "$NVMF_PORT", 00:30:07.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:07.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:07.091 "hdgst": ${hdgst:-false}, 00:30:07.091 "ddgst": ${ddgst:-false} 00:30:07.091 }, 00:30:07.091 "method": "bdev_nvme_attach_controller" 00:30:07.091 } 00:30:07.091 EOF 00:30:07.091 )") 00:30:07.091 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:07.091 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:07.091 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:07.092 { 00:30:07.092 "params": { 00:30:07.092 "name": "Nvme$subsystem", 00:30:07.092 "trtype": "$TEST_TRANSPORT", 00:30:07.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:07.092 "adrfam": "ipv4", 00:30:07.092 "trsvcid": "$NVMF_PORT", 00:30:07.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:07.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:07.092 "hdgst": ${hdgst:-false}, 00:30:07.092 "ddgst": ${ddgst:-false} 00:30:07.092 }, 00:30:07.092 "method": "bdev_nvme_attach_controller" 00:30:07.092 } 00:30:07.092 EOF 00:30:07.092 )") 00:30:07.092 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:07.092 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:07.092 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:07.092 { 00:30:07.092 "params": { 00:30:07.092 "name": "Nvme$subsystem", 00:30:07.092 "trtype": "$TEST_TRANSPORT", 00:30:07.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:07.092 "adrfam": "ipv4", 00:30:07.092 "trsvcid": "$NVMF_PORT", 00:30:07.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:07.092 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:07.092 "hdgst": ${hdgst:-false}, 00:30:07.092 "ddgst": ${ddgst:-false} 00:30:07.092 }, 00:30:07.092 "method": "bdev_nvme_attach_controller" 00:30:07.092 } 00:30:07.092 EOF 00:30:07.092 )") 00:30:07.092 EAL: No free 2048 kB hugepages reported on node 1 00:30:07.092 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:07.092 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:30:07.092 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:30:07.092 20:21:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:07.092 "params": { 00:30:07.092 "name": "Nvme1", 00:30:07.092 "trtype": "tcp", 00:30:07.092 "traddr": "10.0.0.2", 00:30:07.092 "adrfam": "ipv4", 00:30:07.092 "trsvcid": "4420", 00:30:07.092 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:07.092 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:07.092 "hdgst": false, 00:30:07.092 "ddgst": false 00:30:07.092 }, 00:30:07.092 "method": "bdev_nvme_attach_controller" 00:30:07.092 },{ 00:30:07.092 "params": { 00:30:07.092 "name": "Nvme2", 00:30:07.092 "trtype": "tcp", 00:30:07.092 "traddr": "10.0.0.2", 00:30:07.092 "adrfam": "ipv4", 00:30:07.092 "trsvcid": "4420", 00:30:07.092 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:07.092 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:07.092 "hdgst": false, 00:30:07.092 "ddgst": false 00:30:07.092 }, 00:30:07.092 "method": "bdev_nvme_attach_controller" 00:30:07.092 },{ 00:30:07.092 "params": { 00:30:07.092 "name": "Nvme3", 00:30:07.092 "trtype": "tcp", 00:30:07.092 "traddr": "10.0.0.2", 00:30:07.092 "adrfam": "ipv4", 00:30:07.092 "trsvcid": "4420", 00:30:07.092 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:07.092 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:07.092 "hdgst": false, 00:30:07.092 "ddgst": false 00:30:07.092 }, 00:30:07.092 "method": "bdev_nvme_attach_controller" 00:30:07.092 },{ 00:30:07.092 "params": { 00:30:07.092 "name": "Nvme4", 00:30:07.092 "trtype": "tcp", 00:30:07.092 "traddr": "10.0.0.2", 00:30:07.092 "adrfam": "ipv4", 00:30:07.092 "trsvcid": "4420", 00:30:07.092 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:07.092 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:07.092 "hdgst": false, 00:30:07.092 "ddgst": false 00:30:07.092 }, 00:30:07.092 "method": "bdev_nvme_attach_controller" 00:30:07.092 },{ 00:30:07.092 "params": { 00:30:07.092 "name": "Nvme5", 00:30:07.092 "trtype": "tcp", 00:30:07.092 "traddr": "10.0.0.2", 00:30:07.092 "adrfam": "ipv4", 00:30:07.092 "trsvcid": "4420", 00:30:07.092 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:07.092 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:07.092 "hdgst": false, 00:30:07.092 "ddgst": false 00:30:07.092 }, 00:30:07.092 "method": "bdev_nvme_attach_controller" 00:30:07.092 },{ 00:30:07.092 "params": { 00:30:07.092 "name": "Nvme6", 00:30:07.092 "trtype": "tcp", 00:30:07.092 "traddr": "10.0.0.2", 00:30:07.092 "adrfam": "ipv4", 00:30:07.092 "trsvcid": "4420", 00:30:07.092 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:07.092 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:07.092 "hdgst": false, 00:30:07.092 "ddgst": false 00:30:07.092 }, 00:30:07.092 "method": "bdev_nvme_attach_controller" 00:30:07.092 },{ 00:30:07.092 "params": { 00:30:07.092 "name": "Nvme7", 00:30:07.092 "trtype": "tcp", 00:30:07.092 "traddr": "10.0.0.2", 00:30:07.092 "adrfam": "ipv4", 00:30:07.092 "trsvcid": "4420", 00:30:07.092 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:07.092 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:30:07.092 "hdgst": false, 00:30:07.092 "ddgst": false 00:30:07.092 }, 00:30:07.092 "method": "bdev_nvme_attach_controller" 00:30:07.092 },{ 00:30:07.092 "params": { 00:30:07.092 "name": "Nvme8", 00:30:07.092 "trtype": "tcp", 00:30:07.092 "traddr": "10.0.0.2", 00:30:07.092 "adrfam": "ipv4", 00:30:07.092 "trsvcid": "4420", 00:30:07.092 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:07.092 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:07.092 "hdgst": false, 00:30:07.092 "ddgst": false 00:30:07.092 }, 00:30:07.092 "method": "bdev_nvme_attach_controller" 00:30:07.092 },{ 00:30:07.092 "params": { 00:30:07.092 "name": "Nvme9", 00:30:07.092 "trtype": "tcp", 00:30:07.092 "traddr": "10.0.0.2", 00:30:07.092 "adrfam": "ipv4", 00:30:07.092 "trsvcid": "4420", 00:30:07.092 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:07.092 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:07.092 "hdgst": false, 00:30:07.092 "ddgst": false 00:30:07.092 }, 00:30:07.092 "method": "bdev_nvme_attach_controller" 00:30:07.092 },{ 00:30:07.092 "params": { 00:30:07.092 "name": "Nvme10", 00:30:07.092 "trtype": "tcp", 00:30:07.092 "traddr": "10.0.0.2", 00:30:07.092 "adrfam": "ipv4", 00:30:07.092 "trsvcid": "4420", 00:30:07.092 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:07.092 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:07.092 "hdgst": false, 00:30:07.092 "ddgst": false 00:30:07.092 }, 00:30:07.092 "method": "bdev_nvme_attach_controller" 00:30:07.092 }' 00:30:07.092 [2024-05-15 20:21:59.426831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:07.092 [2024-05-15 20:21:59.491671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:09.006 Running I/O for 10 seconds... 00:30:09.006 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:09.006 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:30:09.006 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:09.006 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:09.006 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:09.006 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:09.006 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:09.006 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:30:09.006 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:09.006 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:30:09.006 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:30:09.006 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:30:09.006 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:30:09.006 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:09.006 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme1n1 00:30:09.006 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:09.006 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:09.006 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:09.006 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:09.006 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:30:09.006 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:30:09.006 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:30:09.006 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:30:09.006 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:09.266 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:09.266 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:09.266 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:09.266 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:09.266 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:09.266 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:30:09.266 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:30:09.266 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:30:09.542 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:30:09.542 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:09.542 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:09.542 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:09.542 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:09.542 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:09.542 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:09.542 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:30:09.542 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:30:09.542 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:30:09.542 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:30:09.543 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:30:09.543 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 189010 00:30:09.543 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 189010 ']' 00:30:09.543 
20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 189010 00:30:09.543 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:30:09.543 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:09.543 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 189010 00:30:09.543 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:30:09.543 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:30:09.543 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 189010' 00:30:09.543 killing process with pid 189010 00:30:09.543 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 189010 00:30:09.543 [2024-05-15 20:22:01.920512] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:09.543 20:22:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 189010 00:30:09.543 [2024-05-15 20:22:01.921967] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2180db0 is same with the state(5) to be set 00:30:09.543 [2024-05-15 20:22:01.922002] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2180db0 is same with the state(5) to be set 00:30:09.543 [2024-05-15 20:22:01.922010] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2180db0 is same with the state(5) to be set 00:30:09.543 [2024-05-15 20:22:01.922017] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2180db0 is same with the state(5) to be set 00:30:09.543 [2024-05-15 20:22:01.922023] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2180db0 is same with the state(5) to be set 00:30:09.543 [2024-05-15 20:22:01.922030] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2180db0 is same with the state(5) to be set 00:30:09.543 [2024-05-15 20:22:01.922049] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2180db0 is same with the state(5) to be set 00:30:09.543 [2024-05-15 20:22:01.922056] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2180db0 is same with the state(5) to be set 00:30:09.543 [2024-05-15 20:22:01.922063] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2180db0 is same with the state(5) to be set 00:30:09.543 [2024-05-15 20:22:01.922069] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2180db0 is same with the state(5) to be set 00:30:09.543 [2024-05-15 20:22:01.922076] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2180db0 is same with the state(5) to be set 00:30:09.543 [2024-05-15 20:22:01.922082] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2180db0 is same with the state(5) to be set 00:30:09.543 [2024-05-15 20:22:01.922088] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2180db0 is same with the state(5) to be set 00:30:09.543 [2024-05-15 20:22:01.922096] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2180db0 is same with the state(5) to be set
[... the same nvmf_tcp_qpair_set_recv_state message repeats with consecutive timestamps, first for tqpair=0x2180db0 and then for tqpair=0x217e870 ...]
00:30:09.544 [2024-05-15
20:22:01.923675] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923681] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923687] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923694] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923700] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923707] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923717] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923724] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923730] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923739] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923745] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923751] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923759] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923767] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923774] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923780] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923786] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923792] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923799] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923806] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923816] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923822] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same 
with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923828] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923835] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923841] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923847] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923854] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923862] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923869] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923875] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923881] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923888] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923896] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923906] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923917] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923928] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923941] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923953] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923963] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923974] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923985] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.923996] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.924007] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.924017] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.924029] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.924039] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.924050] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.924061] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e870 is same with the state(5) to be set 00:30:09.544 [2024-05-15 20:22:01.924956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.544 [2024-05-15 20:22:01.924990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.544 [2024-05-15 20:22:01.925000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.544 [2024-05-15 20:22:01.925008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.544 [2024-05-15 20:22:01.925016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.544 [2024-05-15 20:22:01.925023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.545 [2024-05-15 20:22:01.925031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.545 [2024-05-15 20:22:01.925038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.545 [2024-05-15 20:22:01.925045] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29faa30 is same with the state(5) to be set 00:30:09.545 [2024-05-15 20:22:01.925110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.545 [2024-05-15 20:22:01.925119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.545 [2024-05-15 20:22:01.925127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.545 [2024-05-15 20:22:01.925134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.545 [2024-05-15 20:22:01.925142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.545 [2024-05-15 20:22:01.925153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.545 [2024-05-15 20:22:01.925162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:09.545 [2024-05-15 20:22:01.925169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.545 [2024-05-15 20:22:01.925175] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2833db0 is same with the state(5) to be set 00:30:09.545 [2024-05-15 20:22:01.925221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.545 [2024-05-15 20:22:01.925231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.545 [2024-05-15 20:22:01.925245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.545 [2024-05-15 20:22:01.925253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.545 [2024-05-15 20:22:01.925263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.545 [2024-05-15 20:22:01.925270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.545 [2024-05-15 20:22:01.925280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.545 [2024-05-15 20:22:01.925288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.545 [2024-05-15 20:22:01.925297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.545 [2024-05-15 20:22:01.925304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.545 [2024-05-15 20:22:01.925320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.545 [2024-05-15 20:22:01.925327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.545 [2024-05-15 20:22:01.925336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.545 [2024-05-15 20:22:01.925343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.545 [2024-05-15 20:22:01.925353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.545 [2024-05-15 20:22:01.925360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.545 [2024-05-15 20:22:01.925370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.545 [2024-05-15 20:22:01.925377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.545 [2024-05-15 
20:22:01.925386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.545 [2024-05-15 20:22:01.925393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.545 [2024-05-15 20:22:01.925403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.545 [2024-05-15 20:22:01.925412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.545 [2024-05-15 20:22:01.925421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.545 [2024-05-15 20:22:01.925428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.545 [2024-05-15 20:22:01.925437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.545 [2024-05-15 20:22:01.925444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.545 [2024-05-15 20:22:01.925454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.545 [2024-05-15 20:22:01.925461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.545 [2024-05-15 20:22:01.925471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.545 [2024-05-15 20:22:01.925479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.545 [2024-05-15 20:22:01.925488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.545 [2024-05-15 20:22:01.925495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.545 [2024-05-15 20:22:01.925505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.545 [2024-05-15 20:22:01.925512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.545 [2024-05-15 20:22:01.925521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.545 [2024-05-15 20:22:01.925529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.545 [2024-05-15 20:22:01.925538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.545 [2024-05-15 20:22:01.925545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.545 [2024-05-15 
20:22:01.925554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.545 [2024-05-15 20:22:01.925561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.545 [2024-05-15 20:22:01.925570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.545 [2024-05-15 20:22:01.925578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.545 [2024-05-15 20:22:01.925587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.545 [2024-05-15 20:22:01.925594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.545 [2024-05-15 20:22:01.925603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.545 [2024-05-15 20:22:01.925610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.545 [2024-05-15 20:22:01.925621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.545 [2024-05-15 20:22:01.925629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.545 [2024-05-15 20:22:01.925638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.545 [2024-05-15 20:22:01.925645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.545 [2024-05-15 20:22:01.925654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.545 [2024-05-15 20:22:01.925662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.545 [2024-05-15 20:22:01.925671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.546 [2024-05-15 20:22:01.925678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.546 [2024-05-15 20:22:01.925687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.546 [2024-05-15 20:22:01.925693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.546 [2024-05-15 20:22:01.925703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.546 [2024-05-15 20:22:01.925710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.546 [2024-05-15 
20:22:01.925719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.546 [2024-05-15 20:22:01.925726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.546 [2024-05-15 20:22:01.925718] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ed10 is same with the state(5) to be set 00:30:09.546 [2024-05-15 20:22:01.925736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.546 [2024-05-15 20:22:01.925744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.546 [2024-05-15 20:22:01.925755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.546 [2024-05-15 20:22:01.925762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.546 [2024-05-15 20:22:01.925771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.546 [2024-05-15 20:22:01.925778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.546 [2024-05-15 20:22:01.925788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.546 [2024-05-15 20:22:01.925795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.546 [2024-05-15 20:22:01.925805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.546 [2024-05-15 20:22:01.925812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.546 [2024-05-15 20:22:01.925821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.546 [2024-05-15 20:22:01.925830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.546 [2024-05-15 20:22:01.925839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.546 [2024-05-15 20:22:01.925846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.546 [2024-05-15 20:22:01.925856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.546 [2024-05-15 20:22:01.925863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.546 [2024-05-15 20:22:01.925872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.546 [2024-05-15 20:22:01.925879]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.546 [2024-05-15 20:22:01.925888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.546 [2024-05-15 20:22:01.925895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.546 [2024-05-15 20:22:01.925904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.546 [2024-05-15 20:22:01.925911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.546 [2024-05-15 20:22:01.925921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.546 [2024-05-15 20:22:01.925928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.546 [2024-05-15 20:22:01.925936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.546 [2024-05-15 20:22:01.925943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.546 [2024-05-15 20:22:01.925952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.546 [2024-05-15 20:22:01.925959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.546 [2024-05-15 20:22:01.925968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.546 [2024-05-15 20:22:01.925975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.546 [2024-05-15 20:22:01.925984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.546 [2024-05-15 20:22:01.925991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.546 [2024-05-15 20:22:01.926000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.546 [2024-05-15 20:22:01.926007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.546 [2024-05-15 20:22:01.926017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.546 [2024-05-15 20:22:01.926025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.546 [2024-05-15 20:22:01.926035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.546 [2024-05-15 20:22:01.926041] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.546 [2024-05-15 20:22:01.926051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.546 [2024-05-15 20:22:01.926058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.546 [2024-05-15 20:22:01.926067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.546 [2024-05-15 20:22:01.926074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.546 [2024-05-15 20:22:01.926082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.546 [2024-05-15 20:22:01.926089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.546 [2024-05-15 20:22:01.926099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.546 [2024-05-15 20:22:01.926106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.546 [2024-05-15 20:22:01.926115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.546 [2024-05-15 20:22:01.926122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.546 [2024-05-15 20:22:01.926131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.546 [2024-05-15 20:22:01.926138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.546 [2024-05-15 20:22:01.926147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.546 [2024-05-15 20:22:01.926155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.546 [2024-05-15 20:22:01.926164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.546 [2024-05-15 20:22:01.926171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.546 [2024-05-15 20:22:01.926180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.546 [2024-05-15 20:22:01.926187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.546 [2024-05-15 20:22:01.926196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.546 [2024-05-15 20:22:01.926203] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.546 [2024-05-15 20:22:01.926212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.547 [2024-05-15 20:22:01.926219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.547 [2024-05-15 20:22:01.926229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.547 [2024-05-15 20:22:01.926236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.547 [2024-05-15 20:22:01.926245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.547 [2024-05-15 20:22:01.926252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.547 [2024-05-15 20:22:01.926262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.547 [2024-05-15 20:22:01.926269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.547 [2024-05-15 20:22:01.926278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.547 [2024-05-15 20:22:01.926284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.547 [2024-05-15 20:22:01.926293] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x297b9b0 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926338] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x297b9b0 was disconnected and freed. reset controller. 
00:30:09.547 [2024-05-15 20:22:01.926412] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f1b0 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926713] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926734] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926740] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926745] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926750] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926755] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926759] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926766] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926773] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926781] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926789] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926794] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926798] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926803] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926807] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926812] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926819] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926824] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926829] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926834] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926839] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926844] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926848] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926853] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926858] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926862] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926866] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926871] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926875] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926879] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926884] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926889] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926896] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926903] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926910] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926917] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926921] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926926] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926930] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926934] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926939] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926943] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926948] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926954] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926958] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926963] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926967] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926972] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926976] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926981] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926985] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926990] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926994] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.926999] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.927003] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.927007] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.927012] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.927016] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.547 [2024-05-15 20:22:01.927023] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.927030] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.927037] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.927042] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.927046] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f670 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.927890] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the 
state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.927911] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.927918] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.927924] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.927930] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.927937] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.927943] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.927953] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.927959] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.927966] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.927976] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.927983] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.927989] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.927996] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.928003] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.928009] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.928015] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.928021] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.928029] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.928039] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.928045] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.928052] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.928058] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.928069] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.928075] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.928082] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.928089] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.928095] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.928101] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.928107] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.928117] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.928124] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.928131] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.928142] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.928154] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.928165] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.928175] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.928181] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.928188] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.928194] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.928200] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.928211] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.928217] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.928228] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 
20:22:01.928237] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.928243] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.548 [2024-05-15 20:22:01.928250] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.549 [2024-05-15 20:22:01.928256] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.549 [2024-05-15 20:22:01.928262] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.549 [2024-05-15 20:22:01.928269] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.549 [2024-05-15 20:22:01.928279] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.549 [2024-05-15 20:22:01.928285] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.549 [2024-05-15 20:22:01.928292] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.549 [2024-05-15 20:22:01.928298] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.549 [2024-05-15 20:22:01.928304] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.549 [2024-05-15 20:22:01.928310] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.549 [2024-05-15 20:22:01.928321] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.549 [2024-05-15 20:22:01.928331] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.549 [2024-05-15 20:22:01.928337] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.549 [2024-05-15 20:22:01.928344] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.549 [2024-05-15 20:22:01.928350] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.549 [2024-05-15 20:22:01.928358] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.549 [2024-05-15 20:22:01.928365] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fb10 is same with the state(5) to be set 00:30:09.549 [2024-05-15 20:22:01.928864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.549 [2024-05-15 20:22:01.928887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.549 [2024-05-15 20:22:01.928900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.549 [2024-05-15 20:22:01.928909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.549 [2024-05-15 20:22:01.928920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.549 [2024-05-15 20:22:01.928929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.549 [2024-05-15 20:22:01.928939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.549 [2024-05-15 20:22:01.928947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.549 [2024-05-15 20:22:01.928958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.549 [2024-05-15 20:22:01.928967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.549 [2024-05-15 20:22:01.928977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.549 [2024-05-15 20:22:01.928986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.549 [2024-05-15 20:22:01.928997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.549 [2024-05-15 20:22:01.929005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.549 [2024-05-15 20:22:01.929016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.549 [2024-05-15 20:22:01.929024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.549 [2024-05-15 20:22:01.929035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.549 [2024-05-15 20:22:01.929044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.549 [2024-05-15 20:22:01.929054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.549 [2024-05-15 20:22:01.929062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.549 [2024-05-15 20:22:01.929071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.549 [2024-05-15 20:22:01.929078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.549 [2024-05-15 20:22:01.929087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 
nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.549 [2024-05-15 20:22:01.929098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.549 [2024-05-15 20:22:01.929108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.549 [2024-05-15 20:22:01.929115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.549 [2024-05-15 20:22:01.929124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.549 [2024-05-15 20:22:01.929131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.549 [2024-05-15 20:22:01.929139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.549 [2024-05-15 20:22:01.929146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.549 [2024-05-15 20:22:01.929155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.549 [2024-05-15 20:22:01.929162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.549 [2024-05-15 20:22:01.929171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.549 [2024-05-15 20:22:01.929178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.549 [2024-05-15 20:22:01.929187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.549 [2024-05-15 20:22:01.929193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.549 [2024-05-15 20:22:01.929202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.549 [2024-05-15 20:22:01.929209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.549 [2024-05-15 20:22:01.929219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.549 [2024-05-15 20:22:01.929225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.549 [2024-05-15 20:22:01.929234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.549 [2024-05-15 20:22:01.929241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.549 [2024-05-15 20:22:01.929250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 
lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.549
[2024-05-15 20:22:01.929257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.549
[2024-05-15 20:22:01.929266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.549
[2024-05-15 20:22:01.929274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.549
[2024-05-15 20:22:01.929284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.549
[2024-05-15 20:22:01.929291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.549
[2024-05-15 20:22:01.929301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.549
[2024-05-15 20:22:01.929308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.549
[2024-05-15 20:22:01.929325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.549
[2024-05-15 20:22:01.929333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.549
[2024-05-15 20:22:01.929342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.550
[2024-05-15 20:22:01.929349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.550
[2024-05-15 20:22:01.929359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.550
[2024-05-15 20:22:01.929357] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.550
[2024-05-15 20:22:01.929378] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.550
[2024-05-15 20:22:01.929386] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.550
[2024-05-15 20:22:01.929394] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.550
[2024-05-15 20:22:01.929401] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.550
[2024-05-15 20:22:01.929408] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929415] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.550
[2024-05-15 20:22:01.929423] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.550
[2024-05-15 20:22:01.929431] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.550
[2024-05-15 20:22:01.929438] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.550
[2024-05-15 20:22:01.929446] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929453] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.550
[2024-05-15 20:22:01.929461] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.550
[2024-05-15 20:22:01.929468] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.550
[2024-05-15 20:22:01.929475] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.550
[2024-05-15 20:22:01.929482] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929489] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.550
[2024-05-15 20:22:01.929496] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.550
[2024-05-15 20:22:01.929503] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.550
[2024-05-15 20:22:01.929510] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.550
[2024-05-15 20:22:01.929517] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929524] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.550
[2024-05-15 20:22:01.929532] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.550
[2024-05-15 20:22:01.929540] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.550
[2024-05-15 20:22:01.929547] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.550
[2024-05-15 20:22:01.929554] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929563] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.550
[2024-05-15 20:22:01.929569] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.550
[2024-05-15 20:22:01.929576] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.550
[2024-05-15 20:22:01.929583] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929592] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.550
[2024-05-15 20:22:01.929600] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.550
[2024-05-15 20:22:01.929607] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.550
[2024-05-15 20:22:01.929614] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929621] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.550
[2024-05-15 20:22:01.929630] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.550
[2024-05-15 20:22:01.929637] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.550
[2024-05-15 20:22:01.929644] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.550
[2024-05-15 20:22:01.929649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.550
[2024-05-15 20:22:01.929651] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.551
[2024-05-15 20:22:01.929658] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.551
[2024-05-15 20:22:01.929659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.551
[2024-05-15 20:22:01.929664] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.551
[2024-05-15 20:22:01.929668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.551
[2024-05-15 20:22:01.929672] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.551
[2024-05-15 20:22:01.929678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.551
[2024-05-15 20:22:01.929679] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.551
[2024-05-15 20:22:01.929688] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.551
[2024-05-15 20:22:01.929688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.551
[2024-05-15 20:22:01.929697] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.551
[2024-05-15 20:22:01.929701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.551
[2024-05-15 20:22:01.929704] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.551
[2024-05-15 20:22:01.929708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.551
[2024-05-15 20:22:01.929711] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.551
[2024-05-15 20:22:01.929718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.551
[2024-05-15 20:22:01.929718] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.551
[2024-05-15 20:22:01.929728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.551
[2024-05-15 20:22:01.929728] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.551
[2024-05-15 20:22:01.929738] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.551
[2024-05-15 20:22:01.929740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.551
[2024-05-15 20:22:01.929744] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.551
[2024-05-15 20:22:01.929747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.551
[2024-05-15 20:22:01.929751] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.551
[2024-05-15 20:22:01.929757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.551
[2024-05-15 20:22:01.929757] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.551
[2024-05-15 20:22:01.929766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.551
[2024-05-15 20:22:01.929766] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.551
[2024-05-15 20:22:01.929775] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.551
[2024-05-15 20:22:01.929777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.551
[2024-05-15 20:22:01.929788] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.551
[2024-05-15 20:22:01.929789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.551
[2024-05-15 20:22:01.929795] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.551
[2024-05-15 20:22:01.929799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.551
[2024-05-15 20:22:01.929802] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.551
[2024-05-15 20:22:01.929807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.551
[2024-05-15 20:22:01.929809] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ffb0 is same with the state(5) to be set 00:30:09.551
[2024-05-15 20:22:01.929816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.551
[2024-05-15 20:22:01.929824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.551
[2024-05-15 20:22:01.929833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.551
[2024-05-15 20:22:01.929840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.551
[2024-05-15 20:22:01.929849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.551
[2024-05-15 20:22:01.929856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.551
[2024-05-15 20:22:01.929865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1
cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.551 [2024-05-15 20:22:01.929872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.551 [2024-05-15 20:22:01.929881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.551 [2024-05-15 20:22:01.929888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.551 [2024-05-15 20:22:01.929897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.551 [2024-05-15 20:22:01.929904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.551 [2024-05-15 20:22:01.929913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.551 [2024-05-15 20:22:01.929920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.551 [2024-05-15 20:22:01.929929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.551 [2024-05-15 20:22:01.929936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.551 [2024-05-15 20:22:01.929945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.551 [2024-05-15 20:22:01.929953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.551 [2024-05-15 20:22:01.929963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.551 [2024-05-15 20:22:01.929970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.551 [2024-05-15 20:22:01.929979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.551 [2024-05-15 20:22:01.929986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.551 [2024-05-15 20:22:01.929995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.551 [2024-05-15 20:22:01.930002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.551 [2024-05-15 20:22:01.930011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.551 [2024-05-15 20:22:01.930018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.551 [2024-05-15 20:22:01.930064] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x291fa40 was disconnected 
and freed. reset controller. 00:30:09.551 [2024-05-15 20:22:01.930088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.551 [2024-05-15 20:22:01.930096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.551 [2024-05-15 20:22:01.930106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.551 [2024-05-15 20:22:01.930113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.551 [2024-05-15 20:22:01.930123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.552 [2024-05-15 20:22:01.930130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.552 [2024-05-15 20:22:01.930139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.552 [2024-05-15 20:22:01.930146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.552 [2024-05-15 20:22:01.930155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.552 [2024-05-15 20:22:01.930162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.552 [2024-05-15 20:22:01.930172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.552 [2024-05-15 20:22:01.930178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.552 [2024-05-15 20:22:01.930187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.552 [2024-05-15 20:22:01.930194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.552 [2024-05-15 20:22:01.930204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.552 [2024-05-15 20:22:01.930213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.552 [2024-05-15 20:22:01.930222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.552 [2024-05-15 20:22:01.930229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.552 [2024-05-15 20:22:01.930238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.552 [2024-05-15 20:22:01.930245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:09.552 [2024-05-15 20:22:01.930254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.552 [2024-05-15 20:22:01.930261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.552 [2024-05-15 20:22:01.930270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.552 [2024-05-15 20:22:01.930277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.552 [2024-05-15 20:22:01.930286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.552 [2024-05-15 20:22:01.930293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.552 [2024-05-15 20:22:01.930302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.552 [2024-05-15 20:22:01.930309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.552 [2024-05-15 20:22:01.930323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.552 [2024-05-15 20:22:01.930330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.552 [2024-05-15 20:22:01.930541] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2180450 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.930561] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2180450 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.930916] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.930929] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.930935] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.930940] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.930945] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.930949] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.930954] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.930958] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.930962] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the 
state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.930969] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.930974] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.930978] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.930983] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.930987] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.930992] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.930996] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.931000] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.931005] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.931010] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.931014] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.931018] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.931023] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.931027] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.931032] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.931036] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.931041] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.931045] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.931050] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.931054] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.931058] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.931063] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.931067] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.931072] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.931076] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.931080] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.931085] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.931089] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.931095] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.931099] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.931103] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.931108] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.931112] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.931116] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.552 [2024-05-15 20:22:01.931121] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.553 [2024-05-15 20:22:01.931125] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.553 [2024-05-15 20:22:01.931130] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.553 [2024-05-15 20:22:01.931134] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.553 [2024-05-15 20:22:01.931138] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.553 [2024-05-15 20:22:01.931143] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.553 [2024-05-15 20:22:01.931148] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.553 [2024-05-15 20:22:01.931152] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.553 [2024-05-15 20:22:01.931156] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.553 [2024-05-15 
20:22:01.931161] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.553 [2024-05-15 20:22:01.931165] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.553 [2024-05-15 20:22:01.931169] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.553 [2024-05-15 20:22:01.931174] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.553 [2024-05-15 20:22:01.931178] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.553 [2024-05-15 20:22:01.931182] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.553 [2024-05-15 20:22:01.931187] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.553 [2024-05-15 20:22:01.931191] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.553 [2024-05-15 20:22:01.931195] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.553 [2024-05-15 20:22:01.931199] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.553 [2024-05-15 20:22:01.931204] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21808f0 is same with the state(5) to be set 00:30:09.553 [2024-05-15 20:22:01.946420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.553 [2024-05-15 20:22:01.946465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.553 [2024-05-15 20:22:01.946477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.553 [2024-05-15 20:22:01.946484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.553 [2024-05-15 20:22:01.946493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.553 [2024-05-15 20:22:01.946500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.553 [2024-05-15 20:22:01.946510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.553 [2024-05-15 20:22:01.946517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.553 [2024-05-15 20:22:01.946526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.553 [2024-05-15 20:22:01.946533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:09.553 [2024-05-15 20:22:01.946542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.553 [2024-05-15 20:22:01.946549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.553 [2024-05-15 20:22:01.946558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.553 [2024-05-15 20:22:01.946565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.553 [2024-05-15 20:22:01.946574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.553 [2024-05-15 20:22:01.946582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.553 [2024-05-15 20:22:01.946591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.553 [2024-05-15 20:22:01.946598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.553 [2024-05-15 20:22:01.946608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.553 [2024-05-15 20:22:01.946615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.553 [2024-05-15 20:22:01.946624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.553 [2024-05-15 20:22:01.946631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.553 [2024-05-15 20:22:01.946641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.553 [2024-05-15 20:22:01.946648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.553 [2024-05-15 20:22:01.946657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.553 [2024-05-15 20:22:01.946664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.553 [2024-05-15 20:22:01.946675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.553 [2024-05-15 20:22:01.946683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.553 [2024-05-15 20:22:01.946692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.553 [2024-05-15 20:22:01.946699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.553 [2024-05-15 
20:22:01.946708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.553 [2024-05-15 20:22:01.946715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.553 [2024-05-15 20:22:01.946724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.553 [2024-05-15 20:22:01.946732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.553 [2024-05-15 20:22:01.946741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.553 [2024-05-15 20:22:01.946748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.554 [2024-05-15 20:22:01.946758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.554 [2024-05-15 20:22:01.946765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.554 [2024-05-15 20:22:01.946774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.554 [2024-05-15 20:22:01.946781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.554 [2024-05-15 20:22:01.946791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.554 [2024-05-15 20:22:01.946797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.554 [2024-05-15 20:22:01.946807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.554 [2024-05-15 20:22:01.946814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.554 [2024-05-15 20:22:01.946823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.554 [2024-05-15 20:22:01.946830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.554 [2024-05-15 20:22:01.946839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.554 [2024-05-15 20:22:01.946846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.554 [2024-05-15 20:22:01.946855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.554 [2024-05-15 20:22:01.946862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.554 [2024-05-15 20:22:01.946871] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.554 [2024-05-15 20:22:01.946879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.554 [2024-05-15 20:22:01.946889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.554 [2024-05-15 20:22:01.946895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.554 [2024-05-15 20:22:01.946904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.554 [2024-05-15 20:22:01.946911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.554 [2024-05-15 20:22:01.946920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.554 [2024-05-15 20:22:01.946927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.554 [2024-05-15 20:22:01.946936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.554 [2024-05-15 20:22:01.946943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.554 [2024-05-15 20:22:01.946952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.554 [2024-05-15 20:22:01.946959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.554 [2024-05-15 20:22:01.946968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.554 [2024-05-15 20:22:01.946975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.554 [2024-05-15 20:22:01.946983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.554 [2024-05-15 20:22:01.946990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.554 [2024-05-15 20:22:01.946999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.554 [2024-05-15 20:22:01.947006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.554 [2024-05-15 20:22:01.947015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.554 [2024-05-15 20:22:01.947022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.554 [2024-05-15 20:22:01.947031] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.554 [2024-05-15 20:22:01.947038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.554 [2024-05-15 20:22:01.947047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.554 [2024-05-15 20:22:01.947054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.554 [2024-05-15 20:22:01.947063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.554 [2024-05-15 20:22:01.947070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.554 [2024-05-15 20:22:01.947080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.554 [2024-05-15 20:22:01.947087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.554 [2024-05-15 20:22:01.947098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.554 [2024-05-15 20:22:01.947105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.554 [2024-05-15 20:22:01.947114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.554 [2024-05-15 20:22:01.947122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.554 [2024-05-15 20:22:01.947131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.554 [2024-05-15 20:22:01.947138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.554 [2024-05-15 20:22:01.947147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.554 [2024-05-15 20:22:01.947154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.554 [2024-05-15 20:22:01.947163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.554 [2024-05-15 20:22:01.947170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.554 [2024-05-15 20:22:01.947179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.554 [2024-05-15 20:22:01.947186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.554 [2024-05-15 20:22:01.947195] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.554 [2024-05-15 20:22:01.947202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.554 [2024-05-15 20:22:01.947211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.554 [2024-05-15 20:22:01.947218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.554 [2024-05-15 20:22:01.947227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.554 [2024-05-15 20:22:01.947234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.554 [2024-05-15 20:22:01.947243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.554 [2024-05-15 20:22:01.947250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.554 [2024-05-15 20:22:01.947346] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2920fd0 was disconnected and freed. reset controller. 00:30:09.554 [2024-05-15 20:22:01.947863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.554 [2024-05-15 20:22:01.947885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.554 [2024-05-15 20:22:01.947902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.554 [2024-05-15 20:22:01.947910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.554 [2024-05-15 20:22:01.947919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.554 [2024-05-15 20:22:01.947926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.555 [2024-05-15 20:22:01.947936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.555 [2024-05-15 20:22:01.947943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.555 [2024-05-15 20:22:01.947952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.555 [2024-05-15 20:22:01.947959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.555 [2024-05-15 20:22:01.947968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.555 [2024-05-15 20:22:01.947975] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.555 [2024-05-15 20:22:01.947984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.555 [2024-05-15 20:22:01.947991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.555 [2024-05-15 20:22:01.948000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.555 [2024-05-15 20:22:01.948008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.555 [2024-05-15 20:22:01.948017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.555 [2024-05-15 20:22:01.948024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.555 [2024-05-15 20:22:01.948033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.555 [2024-05-15 20:22:01.948040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.555 [2024-05-15 20:22:01.948049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.555 [2024-05-15 20:22:01.948056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.555 [2024-05-15 20:22:01.948065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.555 [2024-05-15 20:22:01.948072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.555 [2024-05-15 20:22:01.948081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.555 [2024-05-15 20:22:01.948088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.555 [2024-05-15 20:22:01.948098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.555 [2024-05-15 20:22:01.948106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.555 [2024-05-15 20:22:01.948115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.555 [2024-05-15 20:22:01.948122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.555 [2024-05-15 20:22:01.948131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.555 [2024-05-15 20:22:01.948138] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.555 [2024-05-15 20:22:01.948147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.555 [2024-05-15 20:22:01.948154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.555 [2024-05-15 20:22:01.948163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.555 [2024-05-15 20:22:01.948170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.555 [2024-05-15 20:22:01.948179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.555 [2024-05-15 20:22:01.948186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.555 [2024-05-15 20:22:01.948195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.555 [2024-05-15 20:22:01.948202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.555 [2024-05-15 20:22:01.948210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.555 [2024-05-15 20:22:01.948218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.555 [2024-05-15 20:22:01.948227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.555 [2024-05-15 20:22:01.948234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.555 [2024-05-15 20:22:01.948243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.555 [2024-05-15 20:22:01.948251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.555 [2024-05-15 20:22:01.948260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.555 [2024-05-15 20:22:01.948267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.555 [2024-05-15 20:22:01.948276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.555 [2024-05-15 20:22:01.948283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.555 [2024-05-15 20:22:01.948291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.555 [2024-05-15 20:22:01.948299] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.555 [2024-05-15 20:22:01.948309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.555 [2024-05-15 20:22:01.948324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.555 [2024-05-15 20:22:01.948333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.555 [2024-05-15 20:22:01.948340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.555 [2024-05-15 20:22:01.948349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.555 [2024-05-15 20:22:01.948356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.555 [2024-05-15 20:22:01.948365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.555 [2024-05-15 20:22:01.948372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.555 [2024-05-15 20:22:01.948381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.555 [2024-05-15 20:22:01.948388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.555 [2024-05-15 20:22:01.948398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.555 [2024-05-15 20:22:01.948405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.555 [2024-05-15 20:22:01.948414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.555 [2024-05-15 20:22:01.948421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.555 [2024-05-15 20:22:01.948430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.555 [2024-05-15 20:22:01.948437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.555 [2024-05-15 20:22:01.948446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.555 [2024-05-15 20:22:01.948452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.555 [2024-05-15 20:22:01.948461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.555 [2024-05-15 20:22:01.948468] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.555 [2024-05-15 20:22:01.948477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.555 [2024-05-15 20:22:01.948484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.555 [2024-05-15 20:22:01.948493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.555 [2024-05-15 20:22:01.948500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.555 [2024-05-15 20:22:01.948509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.556 [2024-05-15 20:22:01.948518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.556 [2024-05-15 20:22:01.948526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.556 [2024-05-15 20:22:01.948534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.556 [2024-05-15 20:22:01.948542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.556 [2024-05-15 20:22:01.948549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.556 [2024-05-15 20:22:01.948558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.556 [2024-05-15 20:22:01.948565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.556 [2024-05-15 20:22:01.948574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.556 [2024-05-15 20:22:01.948581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.556 [2024-05-15 20:22:01.948590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.556 [2024-05-15 20:22:01.948597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.556 [2024-05-15 20:22:01.948606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.556 [2024-05-15 20:22:01.948614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.556 [2024-05-15 20:22:01.948622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.556 [2024-05-15 20:22:01.948629] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.556 [2024-05-15 20:22:01.948638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.556 [2024-05-15 20:22:01.948645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.556 [2024-05-15 20:22:01.948654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.556 [2024-05-15 20:22:01.948661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.556 [2024-05-15 20:22:01.948670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.556 [2024-05-15 20:22:01.948677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.556 [2024-05-15 20:22:01.948686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.556 [2024-05-15 20:22:01.948693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.556 [2024-05-15 20:22:01.948702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.556 [2024-05-15 20:22:01.948709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.556 [2024-05-15 20:22:01.948720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.556 [2024-05-15 20:22:01.948727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.556 [2024-05-15 20:22:01.948736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.556 [2024-05-15 20:22:01.948743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.556 [2024-05-15 20:22:01.948752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.556 [2024-05-15 20:22:01.948760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.556 [2024-05-15 20:22:01.948769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.556 [2024-05-15 20:22:01.948776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.556 [2024-05-15 20:22:01.948785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.556 [2024-05-15 20:22:01.948791] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.556 [2024-05-15 20:22:01.948800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.556 [2024-05-15 20:22:01.948807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.556 [2024-05-15 20:22:01.948816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.556 [2024-05-15 20:22:01.948823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.556 [2024-05-15 20:22:01.948832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.556 [2024-05-15 20:22:01.948839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.556 [2024-05-15 20:22:01.948848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.556 [2024-05-15 20:22:01.948855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.556 [2024-05-15 20:22:01.948864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.556 [2024-05-15 20:22:01.948871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.556 [2024-05-15 20:22:01.948880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.556 [2024-05-15 20:22:01.948886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.556 [2024-05-15 20:22:01.948895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.556 [2024-05-15 20:22:01.948902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.556 [2024-05-15 20:22:01.948911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.556 [2024-05-15 20:22:01.948919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.556 [2024-05-15 20:22:01.948947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:09.556 [2024-05-15 20:22:01.948991] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x282ffe0 was disconnected and freed. reset controller. 
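(Editor's aside, not part of the captured console output.) The block above is thousands of near-identical NOTICE pairs: each nvme_io_qpair_print_command line describes an outstanding I/O (opcode, sqid, cid, nsid, lba, len) and the following spdk_nvme_print_completion line shows how it was completed, here "ABORTED - SQ DELETION (00/08)", i.e. generic status code type 0x0, status code 0x08, plus qid/cid/cdw0/sqhd and the phase, more, and do-not-retry bits. When reviewing such a log it is usually enough to count the aborts per queue rather than read every line. The sketch below is a hypothetical helper (the regexes and function names are mine, derived only from the message format visible above) that summarizes a saved console log on stdin; it is illustrative, not an SPDK tool.

#!/usr/bin/env python3
# Hypothetical sketch: summarize repeated print_command / print_completion
# NOTICE pairs from an SPDK console log. Field names follow the log text.
import re
import sys
from collections import Counter

# Matches e.g. "nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128"
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (?P<op>\w+) "
    r"sqid:(?P<sqid>\d+) cid:(?P<cid>\d+) nsid:(?P<nsid>\d+) "
    r"lba:(?P<lba>\d+) len:(?P<len>\d+)"
)
# Matches e.g. "spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0"
CPL_RE = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: (?P<status>[A-Z ()/0-9-]+?) "
    r"qid:(?P<qid>\d+) cid:(?P<cid>\d+)"
)

def summarize(lines):
    """Count printed commands per (opcode, sqid) and completion statuses."""
    cmds = Counter()
    statuses = Counter()
    for line in lines:
        m = CMD_RE.search(line)
        if m:
            cmds[(m["op"], int(m["sqid"]))] += 1
            continue
        m = CPL_RE.search(line)
        if m:
            statuses[m["status"].strip()] += 1
    return cmds, statuses

if __name__ == "__main__":
    cmds, statuses = summarize(sys.stdin)
    for (op, sqid), n in sorted(cmds.items()):
        print(f"{op:5s} sqid:{sqid}  commands printed: {n}")
    for status, n in statuses.most_common():
        print(f"{status}: {n}")

Run as, for example, "python3 summarize_aborts.py < console.log"; for this excerpt it would report a large count of READ/WRITE commands on sqid 1, all completed with "ABORTED - SQ DELETION (00/08)", which matches the "qpair ... was disconnected and freed. reset controller." notices that close each block.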
00:30:09.556 [2024-05-15 20:22:01.949093] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:09.556 [2024-05-15 20:22:01.949127] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2833db0 (9): Bad file descriptor 00:30:09.556 [2024-05-15 20:22:01.949175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.556 [2024-05-15 20:22:01.949185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.556 [2024-05-15 20:22:01.949193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.556 [2024-05-15 20:22:01.949201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.556 [2024-05-15 20:22:01.949209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.556 [2024-05-15 20:22:01.949216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.556 [2024-05-15 20:22:01.949224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.556 [2024-05-15 20:22:01.949231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.556 [2024-05-15 20:22:01.949238] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29fe390 is same with the state(5) to be set 00:30:09.556 [2024-05-15 20:22:01.949262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.556 [2024-05-15 20:22:01.949270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.556 [2024-05-15 20:22:01.949278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.556 [2024-05-15 20:22:01.949285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.556 [2024-05-15 20:22:01.949293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.557 [2024-05-15 20:22:01.949300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.557 [2024-05-15 20:22:01.949307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.557 [2024-05-15 20:22:01.949320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.557 [2024-05-15 20:22:01.949327] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2857250 is same with the state(5) to be set 00:30:09.557 [2024-05-15 20:22:01.949339] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29faa30 
(9): Bad file descriptor 00:30:09.557 [2024-05-15 20:22:01.949366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.557 [2024-05-15 20:22:01.949374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.557 [2024-05-15 20:22:01.949385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.557 [2024-05-15 20:22:01.949392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.557 [2024-05-15 20:22:01.949400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.557 [2024-05-15 20:22:01.949407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.557 [2024-05-15 20:22:01.949415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.557 [2024-05-15 20:22:01.949422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.557 [2024-05-15 20:22:01.949428] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28799b0 is same with the state(5) to be set 00:30:09.557 [2024-05-15 20:22:01.949453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.557 [2024-05-15 20:22:01.949461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.557 [2024-05-15 20:22:01.949469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.557 [2024-05-15 20:22:01.949476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.557 [2024-05-15 20:22:01.949484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.557 [2024-05-15 20:22:01.949491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.557 [2024-05-15 20:22:01.949498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.557 [2024-05-15 20:22:01.949506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.557 [2024-05-15 20:22:01.949513] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x284eab0 is same with the state(5) to be set 00:30:09.557 [2024-05-15 20:22:01.949533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.557 [2024-05-15 20:22:01.949541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.557 [2024-05-15 20:22:01.949549] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.557 [2024-05-15 20:22:01.949556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.557 [2024-05-15 20:22:01.949564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.557 [2024-05-15 20:22:01.949570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.557 [2024-05-15 20:22:01.949578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.557 [2024-05-15 20:22:01.949585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.557 [2024-05-15 20:22:01.949592] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2857790 is same with the state(5) to be set 00:30:09.557 [2024-05-15 20:22:01.949618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.557 [2024-05-15 20:22:01.956306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.557 [2024-05-15 20:22:01.956362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.557 [2024-05-15 20:22:01.956372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.557 [2024-05-15 20:22:01.956383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.557 [2024-05-15 20:22:01.956391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.557 [2024-05-15 20:22:01.956401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.557 [2024-05-15 20:22:01.956410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.557 [2024-05-15 20:22:01.956419] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a03140 is same with the state(5) to be set 00:30:09.557 [2024-05-15 20:22:01.956493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.557 [2024-05-15 20:22:01.956506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.557 [2024-05-15 20:22:01.956516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.557 [2024-05-15 20:22:01.956525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.557 [2024-05-15 20:22:01.956535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:30:09.557 [2024-05-15 20:22:01.956544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.557 [2024-05-15 20:22:01.956554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.557 [2024-05-15 20:22:01.956562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.557 [2024-05-15 20:22:01.956571] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2862960 is same with the state(5) to be set 00:30:09.557 [2024-05-15 20:22:01.956599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.557 [2024-05-15 20:22:01.956609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.557 [2024-05-15 20:22:01.956619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.557 [2024-05-15 20:22:01.956628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.557 [2024-05-15 20:22:01.956638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.557 [2024-05-15 20:22:01.956647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.557 [2024-05-15 20:22:01.956657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:09.557 [2024-05-15 20:22:01.956665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.557 [2024-05-15 20:22:01.956680] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29fad30 is same with the state(5) to be set 00:30:09.557 [2024-05-15 20:22:01.961568] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:30:09.557 [2024-05-15 20:22:01.961601] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:30:09.557 [2024-05-15 20:22:01.961622] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x284eab0 (9): Bad file descriptor 00:30:09.557 [2024-05-15 20:22:01.961636] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2862960 (9): Bad file descriptor 00:30:09.557 [2024-05-15 20:22:01.961683] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29fe390 (9): Bad file descriptor 00:30:09.557 [2024-05-15 20:22:01.961708] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2857250 (9): Bad file descriptor 00:30:09.557 [2024-05-15 20:22:01.961732] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28799b0 (9): Bad file descriptor 00:30:09.557 [2024-05-15 20:22:01.961755] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2857790 (9): Bad file descriptor 00:30:09.557 [2024-05-15 
20:22:01.961775] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a03140 (9): Bad file descriptor 00:30:09.558 [2024-05-15 20:22:01.961796] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29fad30 (9): Bad file descriptor 00:30:09.558 [2024-05-15 20:22:01.962531] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:09.558 [2024-05-15 20:22:01.962565] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:30:09.558 [2024-05-15 20:22:01.963040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-05-15 20:22:01.963557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.558 [2024-05-15 20:22:01.963611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2833db0 with addr=10.0.0.2, port=4420 00:30:09.558 [2024-05-15 20:22:01.963628] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2833db0 is same with the state(5) to be set 00:30:09.558 [2024-05-15 20:22:01.963759] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:09.558 [2024-05-15 20:22:01.965270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.558 [2024-05-15 20:22:01.965298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.558 [2024-05-15 20:22:01.965336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.558 [2024-05-15 20:22:01.965348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.558 [2024-05-15 20:22:01.965364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.558 [2024-05-15 20:22:01.965375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.558 [2024-05-15 20:22:01.965389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.558 [2024-05-15 20:22:01.965401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.558 [2024-05-15 20:22:01.965416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.558 [2024-05-15 20:22:01.965427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.558 [2024-05-15 20:22:01.965448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.558 [2024-05-15 20:22:01.965460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.558 [2024-05-15 20:22:01.965474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.558 [2024-05-15 
20:22:01.965486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.558 [2024-05-15 20:22:01.965500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.558 [2024-05-15 20:22:01.965511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.558 [2024-05-15 20:22:01.965525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.558 [2024-05-15 20:22:01.965537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.558 [2024-05-15 20:22:01.965551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.558 [2024-05-15 20:22:01.965562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.558 [2024-05-15 20:22:01.965577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.558 [2024-05-15 20:22:01.965588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.558 [2024-05-15 20:22:01.965602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.558 [2024-05-15 20:22:01.965614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.558 [2024-05-15 20:22:01.965628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.558 [2024-05-15 20:22:01.965639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.558 [2024-05-15 20:22:01.965654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.558 [2024-05-15 20:22:01.965665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.558 [2024-05-15 20:22:01.965679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.558 [2024-05-15 20:22:01.965690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.558 [2024-05-15 20:22:01.965705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.558 [2024-05-15 20:22:01.965716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.558 [2024-05-15 20:22:01.965730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.558 [2024-05-15 20:22:01.965741] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.558 [2024-05-15 20:22:01.965756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.558 [2024-05-15 20:22:01.965769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.558 [2024-05-15 20:22:01.965783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.558 [2024-05-15 20:22:01.965794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.558 [2024-05-15 20:22:01.965809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.558 [2024-05-15 20:22:01.965820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.558 [2024-05-15 20:22:01.965834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.558 [2024-05-15 20:22:01.965845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.558 [2024-05-15 20:22:01.965859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.559 [2024-05-15 20:22:01.965871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.559 [2024-05-15 20:22:01.965885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.559 [2024-05-15 20:22:01.965896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.559 [2024-05-15 20:22:01.965911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.559 [2024-05-15 20:22:01.965922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.559 [2024-05-15 20:22:01.965936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.559 [2024-05-15 20:22:01.965947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.559 [2024-05-15 20:22:01.965961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.559 [2024-05-15 20:22:01.965972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.559 [2024-05-15 20:22:01.965987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.559 [2024-05-15 20:22:01.965998] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.559 [2024-05-15 20:22:01.966012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.559 [2024-05-15 20:22:01.966023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.559 [2024-05-15 20:22:01.966037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.559 [2024-05-15 20:22:01.966049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.559 [2024-05-15 20:22:01.966063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.559 [2024-05-15 20:22:01.966074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.559 [2024-05-15 20:22:01.966095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.559 [2024-05-15 20:22:01.966106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.559 [2024-05-15 20:22:01.966120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.559 [2024-05-15 20:22:01.966131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.559 [2024-05-15 20:22:01.966146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.559 [2024-05-15 20:22:01.966157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.559 [2024-05-15 20:22:01.966171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.559 [2024-05-15 20:22:01.966182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.559 [2024-05-15 20:22:01.966197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.559 [2024-05-15 20:22:01.966208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.559 [2024-05-15 20:22:01.966223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.559 [2024-05-15 20:22:01.966233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.559 [2024-05-15 20:22:01.966248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.559 [2024-05-15 20:22:01.966259] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.559 [2024-05-15 20:22:01.966273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.559 [2024-05-15 20:22:01.966284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.559 [2024-05-15 20:22:01.966298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.559 [2024-05-15 20:22:01.966310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.559 [2024-05-15 20:22:01.966332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.559 [2024-05-15 20:22:01.966343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.559 [2024-05-15 20:22:01.966357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.559 [2024-05-15 20:22:01.966369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.559 [2024-05-15 20:22:01.966383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.559 [2024-05-15 20:22:01.966394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.559 [2024-05-15 20:22:01.966408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.559 [2024-05-15 20:22:01.966422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.559 [2024-05-15 20:22:01.966436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.559 [2024-05-15 20:22:01.966447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.559 [2024-05-15 20:22:01.966462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.559 [2024-05-15 20:22:01.966473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.559 [2024-05-15 20:22:01.966487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.559 [2024-05-15 20:22:01.966499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.559 [2024-05-15 20:22:01.966513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.559 [2024-05-15 20:22:01.966524] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.559 [2024-05-15 20:22:01.966539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.559 [2024-05-15 20:22:01.966551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.559 [2024-05-15 20:22:01.966565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.559 [2024-05-15 20:22:01.966576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.559 [2024-05-15 20:22:01.966591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.559 [2024-05-15 20:22:01.966602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.559 [2024-05-15 20:22:01.966617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.559 [2024-05-15 20:22:01.966628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.559 [2024-05-15 20:22:01.966642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.559 [2024-05-15 20:22:01.966653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.559 [2024-05-15 20:22:01.966668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.559 [2024-05-15 20:22:01.966679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.559 [2024-05-15 20:22:01.966693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.559 [2024-05-15 20:22:01.966704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.559 [2024-05-15 20:22:01.966718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.559 [2024-05-15 20:22:01.966730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.559 [2024-05-15 20:22:01.966746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.560 [2024-05-15 20:22:01.966757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.560 [2024-05-15 20:22:01.966772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.560 [2024-05-15 20:22:01.966783] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.560 [2024-05-15 20:22:01.966797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.560 [2024-05-15 20:22:01.966808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.560 [2024-05-15 20:22:01.966822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.560 [2024-05-15 20:22:01.966834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.560 [2024-05-15 20:22:01.966848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.560 [2024-05-15 20:22:01.966859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.560 [2024-05-15 20:22:01.966873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.560 [2024-05-15 20:22:01.966885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.560 [2024-05-15 20:22:01.966899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.560 [2024-05-15 20:22:01.966910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.560 [2024-05-15 20:22:01.966924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.560 [2024-05-15 20:22:01.966936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.560 [2024-05-15 20:22:01.966950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.560 [2024-05-15 20:22:01.966962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.560 [2024-05-15 20:22:01.966975] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29744e0 is same with the state(5) to be set 00:30:09.560 [2024-05-15 20:22:01.971338] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:30:09.560 [2024-05-15 20:22:01.971814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-05-15 20:22:01.972228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-05-15 20:22:01.972245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2862960 with addr=10.0.0.2, port=4420 00:30:09.560 [2024-05-15 20:22:01.972258] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2862960 is same with the state(5) to be set 00:30:09.560 [2024-05-15 20:22:01.972499] posix.c:1037:posix_sock_create: *ERROR*: connect() 
failed, errno = 111 00:30:09.560 [2024-05-15 20:22:01.972883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-05-15 20:22:01.972898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x284eab0 with addr=10.0.0.2, port=4420 00:30:09.560 [2024-05-15 20:22:01.972916] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x284eab0 is same with the state(5) to be set 00:30:09.560 [2024-05-15 20:22:01.973347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-05-15 20:22:01.973739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.560 [2024-05-15 20:22:01.973754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2857250 with addr=10.0.0.2, port=4420 00:30:09.560 [2024-05-15 20:22:01.973765] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2857250 is same with the state(5) to be set 00:30:09.560 [2024-05-15 20:22:01.973783] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2833db0 (9): Bad file descriptor 00:30:09.560 [2024-05-15 20:22:01.973941] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:09.560 [2024-05-15 20:22:01.974006] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:09.560 [2024-05-15 20:22:01.974058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.560 [2024-05-15 20:22:01.974076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.560 [2024-05-15 20:22:01.974099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.560 [2024-05-15 20:22:01.974111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.560 [2024-05-15 20:22:01.974126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.560 [2024-05-15 20:22:01.974137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.560 [2024-05-15 20:22:01.974152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.560 [2024-05-15 20:22:01.974163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.560 [2024-05-15 20:22:01.974177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.560 [2024-05-15 20:22:01.974188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.560 [2024-05-15 20:22:01.974203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.560 [2024-05-15 20:22:01.974214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.560 
[2024-05-15 20:22:01.974229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.560 [2024-05-15 20:22:01.974240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.560 [2024-05-15 20:22:01.974255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.560 [2024-05-15 20:22:01.974266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.560 [2024-05-15 20:22:01.974280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.560 [2024-05-15 20:22:01.974292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.560 [2024-05-15 20:22:01.974306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.560 [2024-05-15 20:22:01.974331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.560 [2024-05-15 20:22:01.974346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.560 [2024-05-15 20:22:01.974358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.560 [2024-05-15 20:22:01.974372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.560 [2024-05-15 20:22:01.974383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.560 [2024-05-15 20:22:01.974398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.560 [2024-05-15 20:22:01.974409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.560 [2024-05-15 20:22:01.974424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.560 [2024-05-15 20:22:01.974435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.560 [2024-05-15 20:22:01.974450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.560 [2024-05-15 20:22:01.974461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.560 [2024-05-15 20:22:01.974476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.560 [2024-05-15 20:22:01.974487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.560 [2024-05-15 20:22:01.974502] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.560 [2024-05-15 20:22:01.974513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.560 [2024-05-15 20:22:01.974527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.560 [2024-05-15 20:22:01.974539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.560 [2024-05-15 20:22:01.974553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.560 [2024-05-15 20:22:01.974565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.560 [2024-05-15 20:22:01.974580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.560 [2024-05-15 20:22:01.974591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.974606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.974617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.974632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.974643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.974660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.974672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.974686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.974698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.974712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.974724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.974738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.974749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.974764] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.974776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.974791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.974802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.974816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.974827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.974842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.974853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.974868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.974879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.974894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.974905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.974920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.974931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.974945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.974956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.974971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.974984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.974999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.975010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.975025] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.975036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.975051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.975062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.975076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.975088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.975105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.975116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.975131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.975142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.975156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.975167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.975182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.975193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.975208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.975219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.975234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.975245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.975260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.975271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.975285] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.975297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.975321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.975334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.975348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.975360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.975374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.975386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.975401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.975412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.975427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.975438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.975453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.975464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.975478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.975489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.975504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.975515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.975530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.975541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.975556] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.975567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.975582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.561 [2024-05-15 20:22:01.975593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.561 [2024-05-15 20:22:01.975608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.562 [2024-05-15 20:22:01.975619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.562 [2024-05-15 20:22:01.975633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.562 [2024-05-15 20:22:01.975647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.562 [2024-05-15 20:22:01.975662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.562 [2024-05-15 20:22:01.975673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.562 [2024-05-15 20:22:01.975688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.562 [2024-05-15 20:22:01.975699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.562 [2024-05-15 20:22:01.975714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.562 [2024-05-15 20:22:01.975725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.562 [2024-05-15 20:22:01.975739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.562 [2024-05-15 20:22:01.975751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.562 [2024-05-15 20:22:01.975765] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x282eae0 is same with the state(5) to be set 00:30:09.562 [2024-05-15 20:22:01.975819] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x282eae0 was disconnected and freed. reset controller. 
00:30:09.562 [2024-05-15 20:22:01.975892] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:09.562 [2024-05-15 20:22:01.976304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.562 [2024-05-15 20:22:01.976622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.562 [2024-05-15 20:22:01.976637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x29faa30 with addr=10.0.0.2, port=4420 00:30:09.562 [2024-05-15 20:22:01.976649] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29faa30 is same with the state(5) to be set 00:30:09.562 [2024-05-15 20:22:01.976665] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2862960 (9): Bad file descriptor 00:30:09.562 [2024-05-15 20:22:01.976680] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x284eab0 (9): Bad file descriptor 00:30:09.562 [2024-05-15 20:22:01.976694] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2857250 (9): Bad file descriptor 00:30:09.562 [2024-05-15 20:22:01.976707] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:09.562 [2024-05-15 20:22:01.976717] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:09.562 [2024-05-15 20:22:01.976730] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:09.562 [2024-05-15 20:22:01.976768] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:09.562 [2024-05-15 20:22:01.976811] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:09.562 [2024-05-15 20:22:01.976827] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:09.562 [2024-05-15 20:22:01.976842] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:09.562 [2024-05-15 20:22:01.976858] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:09.562 [2024-05-15 20:22:01.979301] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:09.562 [2024-05-15 20:22:01.979350] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:30:09.562 [2024-05-15 20:22:01.979384] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29faa30 (9): Bad file descriptor 00:30:09.562 [2024-05-15 20:22:01.979398] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:30:09.562 [2024-05-15 20:22:01.979408] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:30:09.562 [2024-05-15 20:22:01.979419] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:30:09.562 [2024-05-15 20:22:01.979436] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:30:09.562 [2024-05-15 20:22:01.979446] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:30:09.562 [2024-05-15 20:22:01.979457] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:30:09.562 [2024-05-15 20:22:01.979474] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:30:09.562 [2024-05-15 20:22:01.979484] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:30:09.562 [2024-05-15 20:22:01.979495] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:30:09.562 [2024-05-15 20:22:01.979549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.562 [2024-05-15 20:22:01.979564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.562 [2024-05-15 20:22:01.979582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.562 [2024-05-15 20:22:01.979593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.562 [2024-05-15 20:22:01.979609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.562 [2024-05-15 20:22:01.979620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.562 [2024-05-15 20:22:01.979634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.562 [2024-05-15 20:22:01.979646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.562 [2024-05-15 20:22:01.979661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.562 [2024-05-15 20:22:01.979672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.562 [2024-05-15 20:22:01.979687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.562 [2024-05-15 20:22:01.979698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.562 [2024-05-15 20:22:01.979713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.562 [2024-05-15 20:22:01.979724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.562 [2024-05-15 20:22:01.979739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:09.562 [2024-05-15 20:22:01.979750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.562 [2024-05-15 20:22:01.979765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.562 [2024-05-15 20:22:01.979780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.562 [2024-05-15 20:22:01.979795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.562 [2024-05-15 20:22:01.979806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.562 [2024-05-15 20:22:01.979822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.562 [2024-05-15 20:22:01.979834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.562 [2024-05-15 20:22:01.979849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.562 [2024-05-15 20:22:01.979860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.562 [2024-05-15 20:22:01.979875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.562 [2024-05-15 20:22:01.979886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.562 [2024-05-15 20:22:01.979901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.562 [2024-05-15 20:22:01.979914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.563 [2024-05-15 20:22:01.979930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.563 [2024-05-15 20:22:01.979943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.563 [2024-05-15 20:22:01.979958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.563 [2024-05-15 20:22:01.979969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.563 [2024-05-15 20:22:01.979984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.563 [2024-05-15 20:22:01.979995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.563 [2024-05-15 20:22:01.980011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.563 [2024-05-15 
20:22:01.980022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.563 [2024-05-15 20:22:01.980037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.563 [2024-05-15 20:22:01.980048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.563 [2024-05-15 20:22:01.980062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.563 [2024-05-15 20:22:01.980075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.563 [2024-05-15 20:22:01.980090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.563 [2024-05-15 20:22:01.980102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.563 [2024-05-15 20:22:01.980120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.563 [2024-05-15 20:22:01.980131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.563 [2024-05-15 20:22:01.980146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.563 [2024-05-15 20:22:01.980157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.563 [2024-05-15 20:22:01.980172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.563 [2024-05-15 20:22:01.980184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.563 [2024-05-15 20:22:01.980199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.563 [2024-05-15 20:22:01.980210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.563 [2024-05-15 20:22:01.980224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.563 [2024-05-15 20:22:01.980237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.563 [2024-05-15 20:22:01.980251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.563 [2024-05-15 20:22:01.980264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.563 [2024-05-15 20:22:01.980280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.563 [2024-05-15 20:22:01.980291] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.563 [2024-05-15 20:22:01.980305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.563 [2024-05-15 20:22:01.980322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.563 [2024-05-15 20:22:01.980337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.563 [2024-05-15 20:22:01.980351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.563 [2024-05-15 20:22:01.980366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.563 [2024-05-15 20:22:01.980377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.563 [2024-05-15 20:22:01.980392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.563 [2024-05-15 20:22:01.980403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.563 [2024-05-15 20:22:01.980418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.563 [2024-05-15 20:22:01.980429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.563 [2024-05-15 20:22:01.980444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.563 [2024-05-15 20:22:01.980459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.563 [2024-05-15 20:22:01.980474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.563 [2024-05-15 20:22:01.980485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.563 [2024-05-15 20:22:01.980500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.563 [2024-05-15 20:22:01.980511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.563 [2024-05-15 20:22:01.980526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.563 [2024-05-15 20:22:01.980537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.563 [2024-05-15 20:22:01.980551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.563 [2024-05-15 20:22:01.980562] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.563 [2024-05-15 20:22:01.980577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.563 [2024-05-15 20:22:01.980588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.563 [2024-05-15 20:22:01.980602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.563 [2024-05-15 20:22:01.980613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.563 [2024-05-15 20:22:01.980627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.563 [2024-05-15 20:22:01.980638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.563 [2024-05-15 20:22:01.980654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.563 [2024-05-15 20:22:01.980665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.563 [2024-05-15 20:22:01.980679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.563 [2024-05-15 20:22:01.980690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.564 [2024-05-15 20:22:01.980705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.564 [2024-05-15 20:22:01.980716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.564 [2024-05-15 20:22:01.980731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.564 [2024-05-15 20:22:01.980742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.564 [2024-05-15 20:22:01.980757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.564 [2024-05-15 20:22:01.980768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.564 [2024-05-15 20:22:01.980785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.564 [2024-05-15 20:22:01.980796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.564 [2024-05-15 20:22:01.980810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.564 [2024-05-15 20:22:01.980822] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.564 [2024-05-15 20:22:01.980836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.564 [2024-05-15 20:22:01.980848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.564 [2024-05-15 20:22:01.980862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.564 [2024-05-15 20:22:01.980873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.564 [2024-05-15 20:22:01.980887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.564 [2024-05-15 20:22:01.980898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.564 [2024-05-15 20:22:01.980912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.564 [2024-05-15 20:22:01.980924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.564 [2024-05-15 20:22:01.980938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.564 [2024-05-15 20:22:01.980949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.564 [2024-05-15 20:22:01.980964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.564 [2024-05-15 20:22:01.980975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.564 [2024-05-15 20:22:01.980989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.564 [2024-05-15 20:22:01.981000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.564 [2024-05-15 20:22:01.981015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.564 [2024-05-15 20:22:01.981026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.564 [2024-05-15 20:22:01.981040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.564 [2024-05-15 20:22:01.981052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.564 [2024-05-15 20:22:01.981066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.564 [2024-05-15 20:22:01.981077] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.564 [2024-05-15 20:22:01.981092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.564 [2024-05-15 20:22:01.981105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.564 [2024-05-15 20:22:01.981120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.564 [2024-05-15 20:22:01.981131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.564 [2024-05-15 20:22:01.981145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.564 [2024-05-15 20:22:01.981157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.564 [2024-05-15 20:22:01.981173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.564 [2024-05-15 20:22:01.981184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.564 [2024-05-15 20:22:01.981198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.564 [2024-05-15 20:22:01.981210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.564 [2024-05-15 20:22:01.981224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.564 [2024-05-15 20:22:01.981235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.564 [2024-05-15 20:22:01.981248] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x291e600 is same with the state(5) to be set 00:30:09.564 [2024-05-15 20:22:01.983254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.564 [2024-05-15 20:22:01.983274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.564 [2024-05-15 20:22:01.983290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.564 [2024-05-15 20:22:01.983302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.564 [2024-05-15 20:22:01.983322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.564 [2024-05-15 20:22:01.983334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.564 [2024-05-15 20:22:01.983349] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.564 [2024-05-15 20:22:01.983360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.564 [2024-05-15 20:22:01.983374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.564 [2024-05-15 20:22:01.983386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.564 [2024-05-15 20:22:01.983400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.564 [2024-05-15 20:22:01.983411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.564 [2024-05-15 20:22:01.983426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.564 [2024-05-15 20:22:01.983442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.564 [2024-05-15 20:22:01.983456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.564 [2024-05-15 20:22:01.983468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.564 [2024-05-15 20:22:01.983483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.564 [2024-05-15 20:22:01.983494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.564 [2024-05-15 20:22:01.983509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.564 [2024-05-15 20:22:01.983520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.564 [2024-05-15 20:22:01.983535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.564 [2024-05-15 20:22:01.983546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.565 [2024-05-15 20:22:01.983561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.565 [2024-05-15 20:22:01.983572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.565 [2024-05-15 20:22:01.983586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.565 [2024-05-15 20:22:01.983598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.565 [2024-05-15 20:22:01.983612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.565 [2024-05-15 20:22:01.983624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.565 [2024-05-15 20:22:01.983638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.565 [2024-05-15 20:22:01.983650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.565 [2024-05-15 20:22:01.983664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.565 [2024-05-15 20:22:01.983675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.565 [2024-05-15 20:22:01.983690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.565 [2024-05-15 20:22:01.983701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.565 [2024-05-15 20:22:01.983716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.565 [2024-05-15 20:22:01.983727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.565 [2024-05-15 20:22:01.983741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.565 [2024-05-15 20:22:01.983752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.565 [2024-05-15 20:22:01.983770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.565 [2024-05-15 20:22:01.983781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.565 [2024-05-15 20:22:01.983795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.565 [2024-05-15 20:22:01.983807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.565 [2024-05-15 20:22:01.983821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.565 [2024-05-15 20:22:01.983832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.565 [2024-05-15 20:22:01.983847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.565 [2024-05-15 20:22:01.983858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.565 [2024-05-15 20:22:01.983873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.565 [2024-05-15 20:22:01.983884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.565 [2024-05-15 20:22:01.983898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.565 [2024-05-15 20:22:01.983909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.565 [2024-05-15 20:22:01.983924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.565 [2024-05-15 20:22:01.983935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.565 [2024-05-15 20:22:01.983949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.565 [2024-05-15 20:22:01.983960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.565 [2024-05-15 20:22:01.983975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.565 [2024-05-15 20:22:01.983986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.565 [2024-05-15 20:22:01.984001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.565 [2024-05-15 20:22:01.984012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.565 [2024-05-15 20:22:01.984027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.565 [2024-05-15 20:22:01.984038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.565 [2024-05-15 20:22:01.984053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.565 [2024-05-15 20:22:01.984064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.565 [2024-05-15 20:22:01.984078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.565 [2024-05-15 20:22:01.984094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.565 [2024-05-15 20:22:01.984109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.565 [2024-05-15 20:22:01.984120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.565 [2024-05-15 20:22:01.984134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:09.565 [2024-05-15 20:22:01.984146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.565 [2024-05-15 20:22:01.984161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.565 [2024-05-15 20:22:01.984172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.565 [2024-05-15 20:22:01.984187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.565 [2024-05-15 20:22:01.984198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.565 [2024-05-15 20:22:01.984213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.565 [2024-05-15 20:22:01.984224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.565 [2024-05-15 20:22:01.984238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.565 [2024-05-15 20:22:01.984250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.565 [2024-05-15 20:22:01.984264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.565 [2024-05-15 20:22:01.984276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.565 [2024-05-15 20:22:01.984290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.565 [2024-05-15 20:22:01.984301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.565 [2024-05-15 20:22:01.984320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.565 [2024-05-15 20:22:01.984331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.565 [2024-05-15 20:22:01.984346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.565 [2024-05-15 20:22:01.984357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.565 [2024-05-15 20:22:01.984371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.565 [2024-05-15 20:22:01.984383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.565 [2024-05-15 20:22:01.984398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:09.565 [2024-05-15 20:22:01.984409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.565 [2024-05-15 20:22:01.984426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.565 [2024-05-15 20:22:01.984438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.566 [2024-05-15 20:22:01.984453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.566 [2024-05-15 20:22:01.984464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.566 [2024-05-15 20:22:01.984479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.566 [2024-05-15 20:22:01.984491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.566 [2024-05-15 20:22:01.984505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.566 [2024-05-15 20:22:01.984517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.566 [2024-05-15 20:22:01.984531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.566 [2024-05-15 20:22:01.984542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.566 [2024-05-15 20:22:01.984557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.566 [2024-05-15 20:22:01.984568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.566 [2024-05-15 20:22:01.984582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.566 [2024-05-15 20:22:01.984594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.566 [2024-05-15 20:22:01.984608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.566 [2024-05-15 20:22:01.984619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.566 [2024-05-15 20:22:01.984634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.566 [2024-05-15 20:22:01.984645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.566 [2024-05-15 20:22:01.984659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.566 [2024-05-15 
20:22:01.984670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.566 [2024-05-15 20:22:01.984684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.566 [2024-05-15 20:22:01.984696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.566 [2024-05-15 20:22:01.984710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.566 [2024-05-15 20:22:01.984721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.566 [2024-05-15 20:22:01.984736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.566 [2024-05-15 20:22:01.984749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.566 [2024-05-15 20:22:01.984763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.566 [2024-05-15 20:22:01.984774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.566 [2024-05-15 20:22:01.984789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.566 [2024-05-15 20:22:01.984800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.566 [2024-05-15 20:22:01.984814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.566 [2024-05-15 20:22:01.984826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.566 [2024-05-15 20:22:01.984840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.566 [2024-05-15 20:22:01.984852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.566 [2024-05-15 20:22:01.984867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.566 [2024-05-15 20:22:01.984878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.566 [2024-05-15 20:22:01.984892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.566 [2024-05-15 20:22:01.984904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.566 [2024-05-15 20:22:01.984918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.566 [2024-05-15 20:22:01.984930] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.566 [2024-05-15 20:22:01.984942] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x282c0e0 is same with the state(5) to be set 00:30:09.566 [2024-05-15 20:22:01.986650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.566 [2024-05-15 20:22:01.986664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.566 [2024-05-15 20:22:01.986677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.566 [2024-05-15 20:22:01.986687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.566 [2024-05-15 20:22:01.986699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.566 [2024-05-15 20:22:01.986708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.566 [2024-05-15 20:22:01.986720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.566 [2024-05-15 20:22:01.986728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.566 [2024-05-15 20:22:01.986740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.566 [2024-05-15 20:22:01.986751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.566 [2024-05-15 20:22:01.986763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.566 [2024-05-15 20:22:01.986771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.566 [2024-05-15 20:22:01.986784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.566 [2024-05-15 20:22:01.986793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.566 [2024-05-15 20:22:01.986805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.566 [2024-05-15 20:22:01.986814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.566 [2024-05-15 20:22:01.986825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.566 [2024-05-15 20:22:01.986835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.566 [2024-05-15 20:22:01.986846] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.566 [2024-05-15 20:22:01.986855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.566 [2024-05-15 20:22:01.986868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.566 [2024-05-15 20:22:01.986877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.566 [2024-05-15 20:22:01.986888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.566 [2024-05-15 20:22:01.986897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.566 [2024-05-15 20:22:01.986910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.566 [2024-05-15 20:22:01.986920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.566 [2024-05-15 20:22:01.986932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.566 [2024-05-15 20:22:01.986941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.566 [2024-05-15 20:22:01.986952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.567 [2024-05-15 20:22:01.986961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.567 [2024-05-15 20:22:01.986972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.567 [2024-05-15 20:22:01.986982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.567 [2024-05-15 20:22:01.986993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.567 [2024-05-15 20:22:01.987002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.567 [2024-05-15 20:22:01.987015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.567 [2024-05-15 20:22:01.987024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.567 [2024-05-15 20:22:01.987035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.567 [2024-05-15 20:22:01.987043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.567 [2024-05-15 20:22:01.987056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.567 [2024-05-15 20:22:01.987065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.567 [2024-05-15 20:22:01.987076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.567 [2024-05-15 20:22:01.987085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.567 [2024-05-15 20:22:01.987097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.567 [2024-05-15 20:22:01.987105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.567 [2024-05-15 20:22:01.987116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.567 [2024-05-15 20:22:01.987125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.567 [2024-05-15 20:22:01.987136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.567 [2024-05-15 20:22:01.987144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.567 [2024-05-15 20:22:01.987156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.567 [2024-05-15 20:22:01.987165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.567 [2024-05-15 20:22:01.987175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.567 [2024-05-15 20:22:01.987184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.567 [2024-05-15 20:22:01.987195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.567 [2024-05-15 20:22:01.987204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.567 [2024-05-15 20:22:01.987216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.567 [2024-05-15 20:22:01.987225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.567 [2024-05-15 20:22:01.987236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.567 [2024-05-15 20:22:01.987246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.567 [2024-05-15 20:22:01.987257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.567 [2024-05-15 20:22:01.987267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.567 [2024-05-15 20:22:01.987279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.567 [2024-05-15 20:22:01.987288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.567 [2024-05-15 20:22:01.987299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.567 [2024-05-15 20:22:01.987307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.567 [2024-05-15 20:22:01.987322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.567 [2024-05-15 20:22:01.987331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.567 [2024-05-15 20:22:01.987342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.567 [2024-05-15 20:22:01.987350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.567 [2024-05-15 20:22:01.987361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.567 [2024-05-15 20:22:01.987369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.567 [2024-05-15 20:22:01.987380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.567 [2024-05-15 20:22:01.987388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.567 [2024-05-15 20:22:01.987401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.567 [2024-05-15 20:22:01.987410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.567 [2024-05-15 20:22:01.987422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.567 [2024-05-15 20:22:01.987431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.567 [2024-05-15 20:22:01.987441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.567 [2024-05-15 20:22:01.987450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.567 [2024-05-15 20:22:01.987460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:09.567 [2024-05-15 20:22:01.987469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.567 [2024-05-15 20:22:01.987480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.567 [2024-05-15 20:22:01.987488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.567 [2024-05-15 20:22:01.987499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.567 [2024-05-15 20:22:01.987507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.567 [2024-05-15 20:22:01.987520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.567 [2024-05-15 20:22:01.987528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.567 [2024-05-15 20:22:01.987539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.567 [2024-05-15 20:22:01.987548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.567 [2024-05-15 20:22:01.987558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.567 [2024-05-15 20:22:01.987567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.567 [2024-05-15 20:22:01.987577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.567 [2024-05-15 20:22:01.987586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.567 [2024-05-15 20:22:01.987597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.567 [2024-05-15 20:22:01.987605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.567 [2024-05-15 20:22:01.987616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.567 [2024-05-15 20:22:01.987624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.568 [2024-05-15 20:22:01.987635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.568 [2024-05-15 20:22:01.987643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.568 [2024-05-15 20:22:01.987654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:09.568 [2024-05-15 20:22:01.987663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.568 [2024-05-15 20:22:01.987674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.568 [2024-05-15 20:22:01.987682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.568 [2024-05-15 20:22:01.987693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.568 [2024-05-15 20:22:01.987702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.568 [2024-05-15 20:22:01.987713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.568 [2024-05-15 20:22:01.987721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.568 [2024-05-15 20:22:01.987732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.568 [2024-05-15 20:22:01.987740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.568 [2024-05-15 20:22:01.987751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.568 [2024-05-15 20:22:01.987762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.568 [2024-05-15 20:22:01.987773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.568 [2024-05-15 20:22:01.987781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.568 [2024-05-15 20:22:01.987792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.568 [2024-05-15 20:22:01.987801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.568 [2024-05-15 20:22:01.987811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.568 [2024-05-15 20:22:01.987820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.568 [2024-05-15 20:22:01.987831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.568 [2024-05-15 20:22:01.987839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.568 [2024-05-15 20:22:01.987850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.568 [2024-05-15 
20:22:01.987858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.568 [2024-05-15 20:22:01.987869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.568 [2024-05-15 20:22:01.987878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.568 [2024-05-15 20:22:01.987889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.568 [2024-05-15 20:22:01.987897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.568 [2024-05-15 20:22:01.987908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.568 [2024-05-15 20:22:01.987917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.568 [2024-05-15 20:22:01.987928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.568 [2024-05-15 20:22:01.987937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.568 [2024-05-15 20:22:01.987946] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x282d5e0 is same with the state(5) to be set 00:30:09.568 [2024-05-15 20:22:01.989482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.568 [2024-05-15 20:22:01.989497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.568 [2024-05-15 20:22:01.989510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.568 [2024-05-15 20:22:01.989519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.568 [2024-05-15 20:22:01.989529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.568 [2024-05-15 20:22:01.989538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.568 [2024-05-15 20:22:01.989553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.568 [2024-05-15 20:22:01.989561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.568 [2024-05-15 20:22:01.989572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.568 [2024-05-15 20:22:01.989581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.568 [2024-05-15 20:22:01.989592] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.568 [2024-05-15 20:22:01.989601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.568 [2024-05-15 20:22:01.989612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.568 [2024-05-15 20:22:01.989620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.568 [2024-05-15 20:22:01.989631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.568 [2024-05-15 20:22:01.989639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.568 [2024-05-15 20:22:01.989651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.568 [2024-05-15 20:22:01.989659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.568 [2024-05-15 20:22:01.989670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.568 [2024-05-15 20:22:01.989678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.568 [2024-05-15 20:22:01.989689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.568 [2024-05-15 20:22:01.989697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.568 [2024-05-15 20:22:01.989708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.568 [2024-05-15 20:22:01.989716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.568 [2024-05-15 20:22:01.989727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.568 [2024-05-15 20:22:01.989736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.568 [2024-05-15 20:22:01.989746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.568 [2024-05-15 20:22:01.989755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.568 [2024-05-15 20:22:01.989766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.568 [2024-05-15 20:22:01.989774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.568 [2024-05-15 20:22:01.989785] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.568 [2024-05-15 20:22:01.989795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.568 [2024-05-15 20:22:01.989806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.568 [2024-05-15 20:22:01.989815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.569 [2024-05-15 20:22:01.989825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.569 [2024-05-15 20:22:01.989834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.569 [2024-05-15 20:22:01.989845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.569 [2024-05-15 20:22:01.989853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.569 [2024-05-15 20:22:01.989864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.569 [2024-05-15 20:22:01.989873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.569 [2024-05-15 20:22:01.989884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.569 [2024-05-15 20:22:01.989892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.569 [2024-05-15 20:22:01.989903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.569 [2024-05-15 20:22:01.989911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.569 [2024-05-15 20:22:01.989922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.569 [2024-05-15 20:22:01.989931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.569 [2024-05-15 20:22:01.989942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.569 [2024-05-15 20:22:01.989950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.569 [2024-05-15 20:22:01.989961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.569 [2024-05-15 20:22:01.989969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.569 [2024-05-15 20:22:01.989980] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.569 [2024-05-15 20:22:01.989988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.569 [2024-05-15 20:22:01.989999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.569 [2024-05-15 20:22:01.990007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.569 [2024-05-15 20:22:01.990018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.569 [2024-05-15 20:22:01.990027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.569 [2024-05-15 20:22:01.990039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.569 [2024-05-15 20:22:01.990048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.569 [2024-05-15 20:22:01.990059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.569 [2024-05-15 20:22:01.990067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.569 [2024-05-15 20:22:01.990078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.569 [2024-05-15 20:22:01.990086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.569 [2024-05-15 20:22:01.990097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.569 [2024-05-15 20:22:01.990106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.569 [2024-05-15 20:22:01.990117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.569 [2024-05-15 20:22:01.990125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.569 [2024-05-15 20:22:01.990136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.569 [2024-05-15 20:22:01.990144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.569 [2024-05-15 20:22:01.990155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.569 [2024-05-15 20:22:01.990163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.569 [2024-05-15 20:22:01.990174] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.569 [2024-05-15 20:22:01.990183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.569 [2024-05-15 20:22:01.990193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.569 [2024-05-15 20:22:01.990202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.569 [2024-05-15 20:22:01.990213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.569 [2024-05-15 20:22:01.990221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.569 [2024-05-15 20:22:01.990232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.569 [2024-05-15 20:22:01.990240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.569 [2024-05-15 20:22:01.990251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.569 [2024-05-15 20:22:01.990260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.569 [2024-05-15 20:22:01.990271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.569 [2024-05-15 20:22:01.990281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.569 [2024-05-15 20:22:01.990292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.569 [2024-05-15 20:22:01.990301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.569 [2024-05-15 20:22:01.990311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.569 [2024-05-15 20:22:01.990335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.569 [2024-05-15 20:22:01.990347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.569 [2024-05-15 20:22:01.990355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.570 [2024-05-15 20:22:01.990366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.570 [2024-05-15 20:22:01.990374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.570 [2024-05-15 20:22:01.990386] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.570 [2024-05-15 20:22:01.990394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.570 [2024-05-15 20:22:01.990405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.570 [2024-05-15 20:22:01.990414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.570 [2024-05-15 20:22:01.990425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.570 [2024-05-15 20:22:01.990433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.570 [2024-05-15 20:22:01.990444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.570 [2024-05-15 20:22:01.990453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.570 [2024-05-15 20:22:01.990464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.570 [2024-05-15 20:22:01.990472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.570 [2024-05-15 20:22:01.990483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.570 [2024-05-15 20:22:01.990492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.570 [2024-05-15 20:22:01.990503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.570 [2024-05-15 20:22:01.990511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.570 [2024-05-15 20:22:01.990522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.570 [2024-05-15 20:22:01.990531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.570 [2024-05-15 20:22:01.990544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.570 [2024-05-15 20:22:01.990553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.570 [2024-05-15 20:22:01.990563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.570 [2024-05-15 20:22:01.990572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.570 [2024-05-15 20:22:01.990583] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.570 [2024-05-15 20:22:01.990591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.570 [2024-05-15 20:22:01.990602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.570 [2024-05-15 20:22:01.990611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.570 [2024-05-15 20:22:01.990621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.570 [2024-05-15 20:22:01.990630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.570 [2024-05-15 20:22:01.990641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.570 [2024-05-15 20:22:01.990649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.570 [2024-05-15 20:22:01.990660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.570 [2024-05-15 20:22:01.990669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.570 [2024-05-15 20:22:01.990680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.570 [2024-05-15 20:22:01.990689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.570 [2024-05-15 20:22:01.990700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.570 [2024-05-15 20:22:01.990708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.570 [2024-05-15 20:22:01.990719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.570 [2024-05-15 20:22:01.990728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.570 [2024-05-15 20:22:01.990739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.570 [2024-05-15 20:22:01.990747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:09.570 [2024-05-15 20:22:01.990757] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2985100 is same with the state(5) to be set 00:30:09.570 [2024-05-15 20:22:01.992887] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:09.570 [2024-05-15 20:22:01.992909] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
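For context, the long run of "ABORTED - SQ DELETION (00/08)" completions above is the expected signature of this negative test: the target's submission queue is deleted while verify I/O is still in flight, so every outstanding READ/WRITE (len:128 blocks) is completed with generic status type 00h, status code 08h (Command Aborted due to SQ Deletion). A minimal shell sketch for condensing such a burst during triage follows; the log file name is an assumption, not an artifact produced by this job.

# Illustrative sketch, not part of the captured console output.
# Assumes the console text above was saved to shutdown_tc3.log (hypothetical file name).
grep -o 'ABORTED - SQ DELETION (00/08)' shutdown_tc3.log | wc -l      # total aborted completions
grep -o 'lba:[0-9]* len:[0-9]*' shutdown_tc3.log | sort -u | head     # which LBAs still had I/O in flight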
00:30:09.570 [2024-05-15 20:22:01.992917] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:09.570 [2024-05-15 20:22:01.992930] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:30:09.570 [2024-05-15 20:22:01.992944] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:30:09.570 [2024-05-15 20:22:01.993422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.570 [2024-05-15 20:22:01.993800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.570 [2024-05-15 20:22:01.993813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28799b0 with addr=10.0.0.2, port=4420 00:30:09.570 [2024-05-15 20:22:01.993822] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28799b0 is same with the state(5) to be set 00:30:09.570 [2024-05-15 20:22:01.993832] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:30:09.570 [2024-05-15 20:22:01.993839] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:30:09.570 [2024-05-15 20:22:01.993847] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:30:09.570 [2024-05-15 20:22:01.993876] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:09.570 [2024-05-15 20:22:01.993896] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:09.570 [2024-05-15 20:22:01.993908] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
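The connect() failures interleaved with the resets above report errno = 111, which on Linux is ECONNREFUSED: the target at 10.0.0.2:4420 has already been torn down, so every reconnect attempt made during the controller reset is refused, and bdev_nvme eventually reports "Resetting controller failed." and "Unable to perform failover, already in progress." A quick way to confirm the errno mapping on the build host, assuming kernel headers are installed at the usual path:

# Illustrative sketch, not part of the captured console output.
grep -w 111 /usr/include/asm-generic/errno.h
# expected: #define ECONNREFUSED  111  /* Connection refused */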
00:30:09.570 [2024-05-15 20:22:01.993952] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28799b0 (9): Bad file descriptor 00:30:09.570 [2024-05-15 20:22:01.994336] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:30:09.570 task offset: 24576 on job bdev=Nvme1n1 fails 00:30:09.570 00:30:09.570 Latency(us) 00:30:09.570 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:09.570 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:09.570 Job: Nvme1n1 ended in about 0.93 seconds with error 00:30:09.570 Verification LBA range: start 0x0 length 0x400 00:30:09.570 Nvme1n1 : 0.93 207.27 12.95 69.09 0.00 228895.20 4341.76 312825.17 00:30:09.570 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:09.570 Job: Nvme2n1 ended in about 0.98 seconds with error 00:30:09.570 Verification LBA range: start 0x0 length 0x400 00:30:09.570 Nvme2n1 : 0.98 130.60 8.16 65.30 0.00 316809.67 19333.12 263891.63 00:30:09.570 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:09.570 Job: Nvme3n1 ended in about 0.96 seconds with error 00:30:09.570 Verification LBA range: start 0x0 length 0x400 00:30:09.570 Nvme3n1 : 0.96 200.93 12.56 66.98 0.00 226752.85 21845.33 225443.84 00:30:09.570 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:09.570 Job: Nvme4n1 ended in about 0.96 seconds with error 00:30:09.570 Verification LBA range: start 0x0 length 0x400 00:30:09.570 Nvme4n1 : 0.96 200.61 12.54 66.87 0.00 222316.59 23265.28 201850.88 00:30:09.570 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:09.570 Job: Nvme5n1 ended in about 0.98 seconds with error 00:30:09.570 Verification LBA range: start 0x0 length 0x400 00:30:09.570 Nvme5n1 : 0.98 130.11 8.13 65.06 0.00 298963.06 26105.17 288358.40 00:30:09.570 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:09.570 Job: Nvme6n1 ended in about 0.99 seconds with error 00:30:09.570 Verification LBA range: start 0x0 length 0x400 00:30:09.570 Nvme6n1 : 0.99 194.60 12.16 64.87 0.00 220269.87 21080.75 235929.60 00:30:09.570 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:09.570 Job: Nvme7n1 ended in about 0.98 seconds with error 00:30:09.570 Verification LBA range: start 0x0 length 0x400 00:30:09.570 Nvme7n1 : 0.98 196.77 12.30 65.59 0.00 212803.63 23265.28 225443.84 00:30:09.571 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:09.571 Job: Nvme8n1 ended in about 0.96 seconds with error 00:30:09.571 Verification LBA range: start 0x0 length 0x400 00:30:09.571 Nvme8n1 : 0.96 200.27 12.52 66.76 0.00 203745.60 12506.45 235929.60 00:30:09.571 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:09.571 Job: Nvme9n1 ended in about 0.99 seconds with error 00:30:09.571 Verification LBA range: start 0x0 length 0x400 00:30:09.571 Nvme9n1 : 0.99 134.42 8.40 64.68 0.00 268736.13 18240.85 253405.87 00:30:09.571 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:09.571 Job: Nvme10n1 ended in about 0.97 seconds with error 00:30:09.571 Verification LBA range: start 0x0 length 0x400 00:30:09.571 Nvme10n1 : 0.97 132.53 8.28 66.26 0.00 261460.48 23156.05 281367.89 00:30:09.571 =================================================================================================================== 00:30:09.571 
Total : 1728.12 108.01 661.46 0.00 241643.32 4341.76 312825.17 00:30:09.571 [2024-05-15 20:22:02.021358] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:09.571 [2024-05-15 20:22:02.021387] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:30:09.571 [2024-05-15 20:22:02.021399] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:09.571 [2024-05-15 20:22:02.021869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.571 [2024-05-15 20:22:02.022271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.571 [2024-05-15 20:22:02.022281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x29fe390 with addr=10.0.0.2, port=4420 00:30:09.571 [2024-05-15 20:22:02.022290] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29fe390 is same with the state(5) to be set 00:30:09.571 [2024-05-15 20:22:02.022657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.571 [2024-05-15 20:22:02.023054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.571 [2024-05-15 20:22:02.023063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2857790 with addr=10.0.0.2, port=4420 00:30:09.571 [2024-05-15 20:22:02.023070] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2857790 is same with the state(5) to be set 00:30:09.571 [2024-05-15 20:22:02.024152] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:09.571 [2024-05-15 20:22:02.024166] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:30:09.571 [2024-05-15 20:22:02.024175] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:30:09.571 [2024-05-15 20:22:02.024184] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:30:09.571 [2024-05-15 20:22:02.024610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.571 [2024-05-15 20:22:02.025008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.571 [2024-05-15 20:22:02.025017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2a03140 with addr=10.0.0.2, port=4420 00:30:09.571 [2024-05-15 20:22:02.025024] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a03140 is same with the state(5) to be set 00:30:09.571 [2024-05-15 20:22:02.025458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.571 [2024-05-15 20:22:02.025706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.571 [2024-05-15 20:22:02.025715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x29fad30 with addr=10.0.0.2, port=4420 00:30:09.571 [2024-05-15 20:22:02.025723] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29fad30 is same with the state(5) to be set 00:30:09.571 [2024-05-15 20:22:02.025738] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29fe390 (9): Bad file descriptor 00:30:09.571 [2024-05-15 20:22:02.025749] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x2857790 (9): Bad file descriptor 00:30:09.571 [2024-05-15 20:22:02.025757] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:30:09.571 [2024-05-15 20:22:02.025764] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:30:09.571 [2024-05-15 20:22:02.025771] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:30:09.571 [2024-05-15 20:22:02.025811] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:09.571 [2024-05-15 20:22:02.025822] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:09.571 [2024-05-15 20:22:02.025833] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:09.571 [2024-05-15 20:22:02.026127] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:09.571 [2024-05-15 20:22:02.026438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.571 [2024-05-15 20:22:02.026669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.571 [2024-05-15 20:22:02.026680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2833db0 with addr=10.0.0.2, port=4420 00:30:09.571 [2024-05-15 20:22:02.026687] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2833db0 is same with the state(5) to be set 00:30:09.571 [2024-05-15 20:22:02.027063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.571 [2024-05-15 20:22:02.027459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.571 [2024-05-15 20:22:02.027468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2857250 with addr=10.0.0.2, port=4420 00:30:09.571 [2024-05-15 20:22:02.027475] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2857250 is same with the state(5) to be set 00:30:09.571 [2024-05-15 20:22:02.027880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-05-15 20:22:02.028273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-05-15 20:22:02.028283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x284eab0 with addr=10.0.0.2, port=4420 00:30:09.832 [2024-05-15 20:22:02.028291] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x284eab0 is same with the state(5) to be set 00:30:09.832 [2024-05-15 20:22:02.028571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-05-15 20:22:02.028950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.832 [2024-05-15 20:22:02.028960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2862960 with addr=10.0.0.2, port=4420 00:30:09.832 [2024-05-15 20:22:02.028967] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2862960 is same with the state(5) to be set 00:30:09.832 [2024-05-15 20:22:02.028976] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a03140 (9): Bad file descriptor 00:30:09.832 [2024-05-15 20:22:02.028985] 
nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29fad30 (9): Bad file descriptor 00:30:09.832 [2024-05-15 20:22:02.028993] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:30:09.832 [2024-05-15 20:22:02.028999] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:30:09.832 [2024-05-15 20:22:02.029006] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:30:09.832 [2024-05-15 20:22:02.029016] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:30:09.832 [2024-05-15 20:22:02.029025] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:30:09.832 [2024-05-15 20:22:02.029032] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:30:09.832 [2024-05-15 20:22:02.029091] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:30:09.832 [2024-05-15 20:22:02.029102] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:09.832 [2024-05-15 20:22:02.029108] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:09.832 [2024-05-15 20:22:02.029121] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2833db0 (9): Bad file descriptor 00:30:09.832 [2024-05-15 20:22:02.029130] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2857250 (9): Bad file descriptor 00:30:09.832 [2024-05-15 20:22:02.029139] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x284eab0 (9): Bad file descriptor 00:30:09.832 [2024-05-15 20:22:02.029148] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2862960 (9): Bad file descriptor 00:30:09.832 [2024-05-15 20:22:02.029156] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:30:09.832 [2024-05-15 20:22:02.029162] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:30:09.832 [2024-05-15 20:22:02.029169] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:30:09.832 [2024-05-15 20:22:02.029177] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:30:09.832 [2024-05-15 20:22:02.029184] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:30:09.832 [2024-05-15 20:22:02.029190] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:30:09.832 [2024-05-15 20:22:02.029227] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:09.832 [2024-05-15 20:22:02.029235] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:09.832 [2024-05-15 20:22:02.029540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-05-15 20:22:02.029721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.833 [2024-05-15 20:22:02.029730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x29faa30 with addr=10.0.0.2, port=4420 00:30:09.833 [2024-05-15 20:22:02.029738] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29faa30 is same with the state(5) to be set 00:30:09.833 [2024-05-15 20:22:02.029745] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:09.833 [2024-05-15 20:22:02.029751] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:09.833 [2024-05-15 20:22:02.029757] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:09.833 [2024-05-15 20:22:02.029767] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:30:09.833 [2024-05-15 20:22:02.029773] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:30:09.833 [2024-05-15 20:22:02.029780] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:30:09.833 [2024-05-15 20:22:02.029789] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:30:09.833 [2024-05-15 20:22:02.029795] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:30:09.833 [2024-05-15 20:22:02.029801] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:30:09.833 [2024-05-15 20:22:02.029813] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:30:09.833 [2024-05-15 20:22:02.029820] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:30:09.833 [2024-05-15 20:22:02.029826] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:30:09.833 [2024-05-15 20:22:02.029858] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:09.833 [2024-05-15 20:22:02.029864] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:09.833 [2024-05-15 20:22:02.029870] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:09.833 [2024-05-15 20:22:02.029876] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:09.833 [2024-05-15 20:22:02.029884] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29faa30 (9): Bad file descriptor 00:30:09.833 [2024-05-15 20:22:02.029910] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:30:09.833 [2024-05-15 20:22:02.029917] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:30:09.833 [2024-05-15 20:22:02.029924] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
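The Latency(us) table above is bdevperf's per-job summary for the ten attached controllers (Nvme1n1 through Nvme10n1): a verify workload at queue depth 64 with 65536-byte I/Os, each job ending with error because the target was killed mid-run, which is precisely what shutdown_tc3 exercises; the reinitialization failures that follow are those controllers giving up on reconnect for the same reason. For reference, a comparable standalone invocation would look roughly like the sketch below; the 10-second runtime and passing the generated bdevperf.conf via --json are assumptions, not the exact command line used by shutdown.sh.

# Illustrative sketch, not the command the test script ran.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -q 64 -o 65536 -w verify -t 10 \
    --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf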
00:30:09.833 [2024-05-15 20:22:02.029952] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:09.833 20:22:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:30:09.833 20:22:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:30:10.776 20:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 189268 00:30:10.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (189268) - No such process 00:30:10.776 20:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:30:10.776 20:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:30:10.776 20:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:30:10.776 20:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:10.776 20:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:10.776 20:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:30:10.776 20:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:10.776 20:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:30:10.776 20:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:10.776 20:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:30:10.776 20:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:10.776 20:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:10.776 rmmod nvme_tcp 00:30:10.776 rmmod nvme_fabrics 00:30:10.776 rmmod nvme_keyring 00:30:11.037 20:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:11.037 20:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:30:11.037 20:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:30:11.037 20:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:30:11.037 20:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:11.037 20:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:11.037 20:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:11.037 20:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:11.037 20:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:11.037 20:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.037 20:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:11.037 20:22:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:12.952 20:22:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:30:12.952 00:30:12.952 real 0m7.958s 00:30:12.952 user 0m19.628s 00:30:12.952 sys 0m1.279s 00:30:12.952 20:22:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:12.952 20:22:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:12.952 ************************************ 00:30:12.952 END TEST nvmf_shutdown_tc3 00:30:12.952 ************************************ 00:30:12.952 20:22:05 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:30:12.952 00:30:12.952 real 0m34.271s 00:30:12.952 user 1m19.942s 00:30:12.952 sys 0m10.224s 00:30:12.952 20:22:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:12.952 20:22:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:12.952 ************************************ 00:30:12.952 END TEST nvmf_shutdown 00:30:12.952 ************************************ 00:30:13.213 20:22:05 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:30:13.213 20:22:05 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:13.213 20:22:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:13.213 20:22:05 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:30:13.213 20:22:05 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:13.213 20:22:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:13.213 20:22:05 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:30:13.213 20:22:05 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:13.213 20:22:05 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:13.213 20:22:05 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:13.213 20:22:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:13.213 ************************************ 00:30:13.213 START TEST nvmf_multicontroller 00:30:13.213 ************************************ 00:30:13.213 20:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:13.213 * Looking for test storage... 
00:30:13.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:13.213 20:22:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:13.213 20:22:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:30:13.213 20:22:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:13.213 20:22:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:13.213 20:22:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:13.213 20:22:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:13.213 20:22:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:13.213 20:22:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:13.213 20:22:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:13.213 20:22:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:13.213 20:22:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:13.213 20:22:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:13.213 20:22:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:13.213 20:22:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:13.213 20:22:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:13.213 20:22:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:13.213 20:22:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:13.213 20:22:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:13.213 20:22:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:13.213 20:22:05 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:13.213 20:22:05 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:13.213 20:22:05 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:13.213 20:22:05 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.214 20:22:05 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.214 20:22:05 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.214 20:22:05 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:30:13.214 20:22:05 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.214 20:22:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:30:13.214 20:22:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:13.214 20:22:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:13.214 20:22:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:13.214 20:22:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:13.214 20:22:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:13.214 20:22:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:13.214 20:22:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:13.214 20:22:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:13.214 20:22:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:13.214 20:22:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:13.214 20:22:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:30:13.214 20:22:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:30:13.214 20:22:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:13.214 20:22:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:30:13.214 20:22:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:30:13.214 20:22:05 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:13.214 20:22:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:13.214 20:22:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:13.214 20:22:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:13.214 20:22:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:13.214 20:22:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.214 20:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:13.214 20:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:13.214 20:22:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:13.214 20:22:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:13.214 20:22:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:30:13.214 20:22:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.355 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:21.355 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:30:21.355 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:21.355 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:21.355 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:21.355 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:21.355 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:21.356 20:22:13 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:21.356 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:21.356 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:21.356 Found net devices under 0000:31:00.0: cvl_0_0 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:21.356 Found net devices under 0000:31:00.1: cvl_0_1 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:21.356 20:22:13 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:21.356 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:21.356 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.428 ms 00:30:21.356 00:30:21.356 --- 10.0.0.2 ping statistics --- 00:30:21.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:21.356 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:30:21.356 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:21.356 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:21.356 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.339 ms 00:30:21.356 00:30:21.356 --- 10.0.0.1 ping statistics --- 00:30:21.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:21.356 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:30:21.618 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:21.618 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:30:21.618 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:21.618 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:21.618 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:21.618 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:21.618 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:21.618 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:21.618 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:21.618 20:22:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:30:21.618 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:21.618 20:22:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:21.618 20:22:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.618 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=194700 00:30:21.618 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 194700 00:30:21.618 20:22:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 194700 ']' 00:30:21.618 20:22:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:21.618 20:22:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:21.618 20:22:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:30:21.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:21.618 20:22:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:21.618 20:22:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:21.618 20:22:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:21.618 [2024-05-15 20:22:13.954824] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:30:21.618 [2024-05-15 20:22:13.954884] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:21.618 EAL: No free 2048 kB hugepages reported on node 1 00:30:21.618 [2024-05-15 20:22:14.030672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:21.618 [2024-05-15 20:22:14.103730] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:21.618 [2024-05-15 20:22:14.103769] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:21.618 [2024-05-15 20:22:14.103776] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:21.618 [2024-05-15 20:22:14.103782] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:21.618 [2024-05-15 20:22:14.103788] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:21.618 [2024-05-15 20:22:14.103890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:21.618 [2024-05-15 20:22:14.104046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:21.618 [2024-05-15 20:22:14.104047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:22.560 20:22:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:22.560 20:22:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:30:22.560 20:22:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:22.560 20:22:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:22.560 20:22:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:22.560 20:22:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:22.560 20:22:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:22.560 20:22:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.560 20:22:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:22.560 [2024-05-15 20:22:14.863771] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:22.560 20:22:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.561 20:22:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:22.561 20:22:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.561 20:22:14 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:22.561 Malloc0 00:30:22.561 20:22:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.561 20:22:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:22.561 20:22:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.561 20:22:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:22.561 20:22:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.561 20:22:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:22.561 20:22:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.561 20:22:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:22.561 20:22:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.561 20:22:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:22.561 20:22:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.561 20:22:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:22.561 [2024-05-15 20:22:14.935448] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:22.561 [2024-05-15 20:22:14.935656] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:22.561 20:22:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.561 20:22:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:22.561 20:22:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.561 20:22:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:22.561 [2024-05-15 20:22:14.947602] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:22.561 20:22:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.561 20:22:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:22.561 20:22:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.561 20:22:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:22.561 Malloc1 00:30:22.561 20:22:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.561 20:22:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:30:22.561 20:22:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.561 20:22:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:22.561 20:22:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.561 20:22:14 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:30:22.561 20:22:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.561 20:22:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:22.561 20:22:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.561 20:22:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:22.561 20:22:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.561 20:22:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:22.561 20:22:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.561 20:22:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:30:22.561 20:22:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.561 20:22:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:22.561 20:22:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.561 20:22:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=194843 00:30:22.561 20:22:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:22.561 20:22:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:30:22.561 20:22:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 194843 /var/tmp/bdevperf.sock 00:30:22.561 20:22:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 194843 ']' 00:30:22.561 20:22:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:22.561 20:22:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:22.561 20:22:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:22.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
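At this point the multicontroller fixture is fully provisioned: a TCP transport, two malloc-backed subsystems (cnode1 and cnode2) each listening on 10.0.0.2 ports 4420 and 4421 inside the cvl_0_0_ns_spdk namespace, and a bdevperf instance idling in -z mode on /var/tmp/bdevperf.sock waiting for controllers to be attached over RPC. Reproducing the same setup by hand with scripts/rpc.py instead of the suite's rpc_cmd wrapper would look roughly like this; the values are copied from the trace above, while driving rpc.py against the default /var/tmp/spdk.sock from the host namespace is an assumption (UNIX-domain sockets are not confined by the network namespace).

# Illustrative sketch, not part of the captured console output.
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# cnode2 with Malloc1 is created the same way before bdevperf starts on /var/tmp/bdevperf.sock.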
00:30:22.561 20:22:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:22.561 20:22:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:23.504 20:22:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:23.504 20:22:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:30:23.504 20:22:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:30:23.504 20:22:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.504 20:22:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:23.765 NVMe0n1 00:30:23.765 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.765 20:22:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:23.765 20:22:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:30:23.765 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.765 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:23.765 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.765 1 00:30:23.765 20:22:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:30:23.765 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:30:23.765 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:30:23.765 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:23.765 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:23.765 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:23.765 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:23.765 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:30:23.765 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:23.766 request: 00:30:23.766 { 00:30:23.766 "name": "NVMe0", 00:30:23.766 "trtype": "tcp", 00:30:23.766 "traddr": "10.0.0.2", 00:30:23.766 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:30:23.766 "hostaddr": "10.0.0.2", 00:30:23.766 "hostsvcid": "60000", 00:30:23.766 "adrfam": "ipv4", 00:30:23.766 "trsvcid": "4420", 00:30:23.766 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:23.766 "method": 
"bdev_nvme_attach_controller", 00:30:23.766 "req_id": 1 00:30:23.766 } 00:30:23.766 Got JSON-RPC error response 00:30:23.766 response: 00:30:23.766 { 00:30:23.766 "code": -114, 00:30:23.766 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:30:23.766 } 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:23.766 request: 00:30:23.766 { 00:30:23.766 "name": "NVMe0", 00:30:23.766 "trtype": "tcp", 00:30:23.766 "traddr": "10.0.0.2", 00:30:23.766 "hostaddr": "10.0.0.2", 00:30:23.766 "hostsvcid": "60000", 00:30:23.766 "adrfam": "ipv4", 00:30:23.766 "trsvcid": "4420", 00:30:23.766 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:23.766 "method": "bdev_nvme_attach_controller", 00:30:23.766 "req_id": 1 00:30:23.766 } 00:30:23.766 Got JSON-RPC error response 00:30:23.766 response: 00:30:23.766 { 00:30:23.766 "code": -114, 00:30:23.766 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:30:23.766 } 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:23.766 request: 00:30:23.766 { 00:30:23.766 "name": "NVMe0", 00:30:23.766 "trtype": "tcp", 00:30:23.766 "traddr": "10.0.0.2", 00:30:23.766 "hostaddr": "10.0.0.2", 00:30:23.766 "hostsvcid": "60000", 00:30:23.766 "adrfam": "ipv4", 00:30:23.766 "trsvcid": "4420", 00:30:23.766 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:23.766 "multipath": "disable", 00:30:23.766 "method": "bdev_nvme_attach_controller", 00:30:23.766 "req_id": 1 00:30:23.766 } 00:30:23.766 Got JSON-RPC error response 00:30:23.766 response: 00:30:23.766 { 00:30:23.766 "code": -114, 00:30:23.766 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:30:23.766 } 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:23.766 request: 00:30:23.766 { 00:30:23.766 "name": "NVMe0", 00:30:23.766 "trtype": "tcp", 00:30:23.766 "traddr": "10.0.0.2", 00:30:23.766 "hostaddr": "10.0.0.2", 00:30:23.766 "hostsvcid": "60000", 00:30:23.766 "adrfam": "ipv4", 00:30:23.766 "trsvcid": "4420", 00:30:23.766 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:23.766 "multipath": "failover", 00:30:23.766 "method": "bdev_nvme_attach_controller", 00:30:23.766 "req_id": 1 00:30:23.766 } 00:30:23.766 Got JSON-RPC error response 00:30:23.766 response: 00:30:23.766 { 00:30:23.766 "code": -114, 00:30:23.766 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:30:23.766 } 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.766 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:24.027 00:30:24.027 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:24.027 20:22:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:24.027 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:24.027 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:24.027 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:24.027 20:22:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:30:24.027 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:24.027 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:24.288 00:30:24.288 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:24.288 20:22:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:24.288 20:22:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:30:24.288 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:24.288 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:24.288 20:22:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:24.288 20:22:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:30:24.288 20:22:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:25.230 0 00:30:25.495 20:22:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:30:25.495 20:22:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.495 20:22:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:25.495 20:22:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.495 20:22:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 194843 00:30:25.495 20:22:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 194843 ']' 00:30:25.495 20:22:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 194843 00:30:25.495 20:22:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:30:25.495 20:22:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:25.495 20:22:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 194843 00:30:25.495 20:22:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:25.495 20:22:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:25.495 20:22:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 194843' 00:30:25.495 killing process with pid 194843 00:30:25.495 20:22:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 194843 00:30:25.495 20:22:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 194843 00:30:25.495 20:22:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:25.496 20:22:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.496 20:22:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:25.496 20:22:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.496 20:22:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:25.496 20:22:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.496 20:22:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:25.496 20:22:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.496 20:22:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:30:25.496 20:22:17 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:25.496 20:22:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:30:25.496 20:22:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:30:25.496 20:22:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:30:25.496 20:22:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:30:25.496 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:25.496 [2024-05-15 20:22:15.065199] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:30:25.496 [2024-05-15 20:22:15.065259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid194843 ] 00:30:25.496 EAL: No free 2048 kB hugepages reported on node 1 00:30:25.496 [2024-05-15 20:22:15.146864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:25.496 [2024-05-15 20:22:15.211169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:25.496 [2024-05-15 20:22:16.589359] bdev.c:4575:bdev_name_add: *ERROR*: Bdev name 255774c1-7bad-46a9-9277-77625bc87f94 already exists 00:30:25.496 [2024-05-15 20:22:16.589389] bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:255774c1-7bad-46a9-9277-77625bc87f94 alias for bdev NVMe1n1 00:30:25.496 [2024-05-15 20:22:16.589399] bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:30:25.496 Running I/O for 1 seconds... 
00:30:25.496 00:30:25.496 Latency(us) 00:30:25.496 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:25.496 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:30:25.496 NVMe0n1 : 1.01 20384.15 79.63 0.00 0.00 6263.10 4314.45 13271.04 00:30:25.496 =================================================================================================================== 00:30:25.496 Total : 20384.15 79.63 0.00 0.00 6263.10 4314.45 13271.04 00:30:25.496 Received shutdown signal, test time was about 1.000000 seconds 00:30:25.496 00:30:25.496 Latency(us) 00:30:25.496 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:25.496 =================================================================================================================== 00:30:25.496 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:25.496 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:25.496 20:22:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:25.496 20:22:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:30:25.496 20:22:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:30:25.496 20:22:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:25.496 20:22:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:30:25.496 20:22:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:25.496 20:22:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:30:25.496 20:22:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:25.496 20:22:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:25.800 rmmod nvme_tcp 00:30:25.800 rmmod nvme_fabrics 00:30:25.800 rmmod nvme_keyring 00:30:25.800 20:22:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:25.800 20:22:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:30:25.800 20:22:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:30:25.800 20:22:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 194700 ']' 00:30:25.800 20:22:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 194700 00:30:25.800 20:22:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 194700 ']' 00:30:25.800 20:22:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 194700 00:30:25.800 20:22:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:30:25.800 20:22:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:25.800 20:22:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 194700 00:30:25.800 20:22:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:30:25.800 20:22:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:30:25.800 20:22:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 194700' 00:30:25.800 killing process with pid 194700 00:30:25.800 20:22:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 194700 00:30:25.800 [2024-05-15 20:22:18.141043] 
app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:25.800 20:22:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 194700 00:30:25.800 20:22:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:25.800 20:22:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:25.800 20:22:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:25.800 20:22:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:25.800 20:22:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:25.800 20:22:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:25.800 20:22:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:26.077 20:22:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.991 20:22:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:27.991 00:30:27.991 real 0m14.807s 00:30:27.991 user 0m18.236s 00:30:27.991 sys 0m6.806s 00:30:27.991 20:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:27.991 20:22:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:27.991 ************************************ 00:30:27.991 END TEST nvmf_multicontroller 00:30:27.991 ************************************ 00:30:27.991 20:22:20 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:27.991 20:22:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:27.991 20:22:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:27.991 20:22:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:27.991 ************************************ 00:30:27.991 START TEST nvmf_aer 00:30:27.991 ************************************ 00:30:27.991 20:22:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:28.252 * Looking for test storage... 
00:30:28.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:30:28.253 20:22:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:36.396 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:36.396 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 
0x159b)' 00:30:36.397 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:36.397 Found net devices under 0000:31:00.0: cvl_0_0 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:36.397 Found net devices under 0000:31:00.1: cvl_0_1 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:36.397 
20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:36.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:36.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:30:36.397 00:30:36.397 --- 10.0.0.2 ping statistics --- 00:30:36.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:36.397 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:36.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:36.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.341 ms 00:30:36.397 00:30:36.397 --- 10.0.0.1 ping statistics --- 00:30:36.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:36.397 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=200196 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 200196 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 200196 ']' 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:36.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:36.397 20:22:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:36.397 [2024-05-15 20:22:28.848412] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:30:36.397 [2024-05-15 20:22:28.848459] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:36.397 EAL: No free 2048 kB hugepages reported on node 1 00:30:36.658 [2024-05-15 20:22:28.935062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:36.658 [2024-05-15 20:22:29.002271] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:36.658 [2024-05-15 20:22:29.002309] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:36.658 [2024-05-15 20:22:29.002321] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:36.658 [2024-05-15 20:22:29.002327] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:36.658 [2024-05-15 20:22:29.002333] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:36.658 [2024-05-15 20:22:29.002531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:36.658 [2024-05-15 20:22:29.002705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:36.658 [2024-05-15 20:22:29.002863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:36.658 [2024-05-15 20:22:29.002864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:37.230 20:22:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:37.230 20:22:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:30:37.230 20:22:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:37.230 20:22:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:37.230 20:22:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:37.494 20:22:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:37.494 20:22:29 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:37.494 20:22:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.494 20:22:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:37.494 [2024-05-15 20:22:29.772192] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:37.494 20:22:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.494 20:22:29 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:30:37.494 20:22:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.494 20:22:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:37.494 Malloc0 00:30:37.494 20:22:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.494 20:22:29 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:30:37.494 20:22:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.494 20:22:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:37.494 20:22:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.494 20:22:29 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:37.494 20:22:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.494 20:22:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:37.494 20:22:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.494 20:22:29 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:37.494 20:22:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.494 20:22:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:37.494 [2024-05-15 20:22:29.831301] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:37.494 [2024-05-15 20:22:29.831523] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:37.494 20:22:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.494 20:22:29 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:30:37.494 20:22:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.494 20:22:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:37.494 [ 00:30:37.494 { 00:30:37.494 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:37.494 "subtype": "Discovery", 00:30:37.494 "listen_addresses": [], 00:30:37.494 "allow_any_host": true, 00:30:37.494 "hosts": [] 00:30:37.494 }, 00:30:37.494 { 00:30:37.495 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:37.495 "subtype": "NVMe", 00:30:37.495 "listen_addresses": [ 00:30:37.495 { 00:30:37.495 "trtype": "TCP", 00:30:37.495 "adrfam": "IPv4", 00:30:37.495 "traddr": "10.0.0.2", 00:30:37.495 "trsvcid": "4420" 00:30:37.495 } 00:30:37.495 ], 00:30:37.495 "allow_any_host": true, 00:30:37.495 "hosts": [], 00:30:37.495 "serial_number": "SPDK00000000000001", 00:30:37.495 "model_number": "SPDK bdev Controller", 00:30:37.495 "max_namespaces": 2, 00:30:37.495 "min_cntlid": 1, 00:30:37.495 "max_cntlid": 65519, 00:30:37.495 "namespaces": [ 00:30:37.495 { 00:30:37.495 "nsid": 1, 00:30:37.495 "bdev_name": "Malloc0", 00:30:37.495 "name": "Malloc0", 00:30:37.495 "nguid": "910FD1A1248B4D3AA47F12366510EA5E", 00:30:37.495 "uuid": "910fd1a1-248b-4d3a-a47f-12366510ea5e" 00:30:37.495 } 00:30:37.495 ] 00:30:37.495 } 00:30:37.495 ] 00:30:37.495 20:22:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.495 20:22:29 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:30:37.495 20:22:29 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:30:37.495 20:22:29 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=200403 00:30:37.495 20:22:29 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:30:37.495 20:22:29 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:30:37.495 20:22:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:30:37.495 20:22:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:37.495 20:22:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:30:37.495 20:22:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:30:37.495 20:22:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:30:37.495 EAL: No free 2048 kB hugepages reported on node 1 00:30:37.495 20:22:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:37.495 20:22:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:30:37.495 20:22:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:30:37.495 20:22:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:30:37.756 20:22:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:37.756 20:22:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 2 -lt 200 ']' 00:30:37.756 20:22:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=3 00:30:37.756 20:22:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:30:37.756 20:22:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:37.756 20:22:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:37.756 20:22:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:30:37.756 20:22:30 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:30:37.756 20:22:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.756 20:22:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:37.756 Malloc1 00:30:37.756 20:22:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.756 20:22:30 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:30:37.756 20:22:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.756 20:22:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:37.756 20:22:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.756 20:22:30 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:30:37.756 20:22:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.756 20:22:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:37.756 Asynchronous Event Request test 00:30:37.756 Attaching to 10.0.0.2 00:30:37.756 Attached to 10.0.0.2 00:30:37.756 Registering asynchronous event callbacks... 00:30:37.756 Starting namespace attribute notice tests for all controllers... 00:30:37.756 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:30:37.756 aer_cb - Changed Namespace 00:30:37.756 Cleaning up... 
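The "aer_cb - Changed Namespace" line above is the event the test is driving at: test/nvme/aer/aer connects to cnode1, arms Asynchronous Event Requests, and the namespace-attribute notice (log page 4, event type 0x02 in the trace) is produced simply by hot-adding a second namespace to the live subsystem. A minimal sketch of that trigger with scripts/rpc.py, using the names and NSID from the trace:

    # create a second malloc bdev (64 MiB, 4 KiB blocks) and attach it as NSID 2
    scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    # on receiving the notice the aer tool touches /tmp/aer_touch_file,
    # which ends the waitforfile polling loop seen in the trace above

After the event fires, nvmf_get_subsystems (shown below) reports both namespaces on cnode1 before the test cleans up.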
00:30:37.756 [ 00:30:37.756 { 00:30:37.756 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:37.756 "subtype": "Discovery", 00:30:37.756 "listen_addresses": [], 00:30:37.756 "allow_any_host": true, 00:30:37.756 "hosts": [] 00:30:37.756 }, 00:30:37.756 { 00:30:37.756 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:37.756 "subtype": "NVMe", 00:30:37.756 "listen_addresses": [ 00:30:37.756 { 00:30:37.756 "trtype": "TCP", 00:30:37.756 "adrfam": "IPv4", 00:30:37.756 "traddr": "10.0.0.2", 00:30:37.756 "trsvcid": "4420" 00:30:37.756 } 00:30:37.756 ], 00:30:37.756 "allow_any_host": true, 00:30:37.756 "hosts": [], 00:30:37.756 "serial_number": "SPDK00000000000001", 00:30:37.756 "model_number": "SPDK bdev Controller", 00:30:37.756 "max_namespaces": 2, 00:30:37.756 "min_cntlid": 1, 00:30:37.756 "max_cntlid": 65519, 00:30:37.756 "namespaces": [ 00:30:37.756 { 00:30:37.756 "nsid": 1, 00:30:37.756 "bdev_name": "Malloc0", 00:30:37.756 "name": "Malloc0", 00:30:37.756 "nguid": "910FD1A1248B4D3AA47F12366510EA5E", 00:30:37.756 "uuid": "910fd1a1-248b-4d3a-a47f-12366510ea5e" 00:30:37.756 }, 00:30:37.756 { 00:30:37.756 "nsid": 2, 00:30:37.756 "bdev_name": "Malloc1", 00:30:37.756 "name": "Malloc1", 00:30:37.756 "nguid": "BD7CCB9ED3A54D5F8BEA60621071B49E", 00:30:37.756 "uuid": "bd7ccb9e-d3a5-4d5f-8bea-60621071b49e" 00:30:37.756 } 00:30:37.756 ] 00:30:37.756 } 00:30:37.756 ] 00:30:37.756 20:22:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.756 20:22:30 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 200403 00:30:37.756 20:22:30 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:37.756 20:22:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.756 20:22:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:37.756 20:22:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.756 20:22:30 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:30:37.756 20:22:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.756 20:22:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:38.017 20:22:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.017 20:22:30 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:38.017 20:22:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:38.017 20:22:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:38.017 20:22:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.017 20:22:30 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:30:38.017 20:22:30 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:30:38.017 20:22:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:38.017 20:22:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:30:38.017 20:22:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:38.017 20:22:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:30:38.017 20:22:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:38.017 20:22:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:38.017 rmmod nvme_tcp 00:30:38.017 rmmod nvme_fabrics 00:30:38.017 rmmod nvme_keyring 00:30:38.017 20:22:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:38.017 20:22:30 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@124 -- # set -e 00:30:38.017 20:22:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:30:38.017 20:22:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 200196 ']' 00:30:38.017 20:22:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 200196 00:30:38.017 20:22:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 200196 ']' 00:30:38.017 20:22:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 200196 00:30:38.017 20:22:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:30:38.017 20:22:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:38.017 20:22:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 200196 00:30:38.017 20:22:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:38.017 20:22:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:38.017 20:22:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 200196' 00:30:38.017 killing process with pid 200196 00:30:38.017 20:22:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 200196 00:30:38.017 [2024-05-15 20:22:30.405441] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:38.017 20:22:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 200196 00:30:38.278 20:22:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:38.279 20:22:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:38.279 20:22:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:38.279 20:22:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:38.279 20:22:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:38.279 20:22:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.279 20:22:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:38.279 20:22:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:40.188 20:22:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:40.188 00:30:40.188 real 0m12.156s 00:30:40.188 user 0m8.427s 00:30:40.188 sys 0m6.675s 00:30:40.188 20:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:40.188 20:22:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:40.188 ************************************ 00:30:40.188 END TEST nvmf_aer 00:30:40.188 ************************************ 00:30:40.188 20:22:32 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:40.188 20:22:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:40.188 20:22:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:40.188 20:22:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:40.449 ************************************ 00:30:40.449 START TEST nvmf_async_init 00:30:40.449 ************************************ 00:30:40.449 20:22:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:40.449 * 
Looking for test storage... 00:30:40.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:40.449 20:22:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:40.449 20:22:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:30:40.449 20:22:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:40.449 20:22:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:40.449 20:22:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:40.449 20:22:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:40.449 20:22:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:40.449 20:22:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:40.449 20:22:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:40.449 20:22:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:40.449 20:22:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:40.449 20:22:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:40.449 20:22:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:40.449 20:22:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:40.449 20:22:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:40.449 20:22:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:40.449 20:22:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:40.449 20:22:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:40.449 20:22:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:40.450 20:22:32 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:40.450 20:22:32 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:40.450 20:22:32 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:40.450 20:22:32 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.450 20:22:32 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.450 20:22:32 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.450 20:22:32 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:30:40.450 20:22:32 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.450 20:22:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:30:40.450 20:22:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:40.450 20:22:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:40.450 20:22:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:40.450 20:22:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:40.450 20:22:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:40.450 20:22:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:40.450 20:22:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:40.450 20:22:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:40.450 20:22:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:30:40.450 20:22:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:30:40.450 20:22:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:30:40.450 20:22:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:30:40.450 20:22:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:30:40.450 20:22:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:30:40.450 20:22:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=a4e904bceeb54985a7b2731f57d9583e 00:30:40.450 20:22:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:30:40.450 20:22:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:40.450 20:22:32 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:40.450 20:22:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:40.450 20:22:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:40.450 20:22:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:40.450 20:22:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:40.450 20:22:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:40.450 20:22:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:40.450 20:22:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:40.450 20:22:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:40.450 20:22:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:30:40.450 20:22:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:48.589 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:48.589 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:48.589 Found net devices under 0000:31:00.0: cvl_0_0 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
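The discovery loop above resolves each supported PCI function to its kernel net device by globbing sysfs, which is how cvl_0_0 and cvl_0_1 are found under the two E810 ports. A standalone sketch of that mapping for the first port seen in this run (the PCI address is taken from this log, so it only applies to this machine):

  pci=0000:31:00.0
  for netdev in /sys/bus/pci/devices/$pci/net/*; do
      [ -e "$netdev" ] || continue          # the glob may not match if the port has no bound netdev
      echo "$pci -> ${netdev##*/}"          # e.g. 0000:31:00.0 -> cvl_0_0
  done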
00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:48.589 Found net devices under 0000:31:00.1: cvl_0_1 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:48.589 20:22:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:48.589 20:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:48.589 20:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:48.589 20:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:48.589 20:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:48.589 20:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:48.589 20:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:30:48.589 20:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:48.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:48.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.565 ms 00:30:48.589 00:30:48.589 --- 10.0.0.2 ping statistics --- 00:30:48.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.589 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms 00:30:48.589 20:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:48.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:48.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.463 ms 00:30:48.589 00:30:48.589 --- 10.0.0.1 ping statistics --- 00:30:48.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.589 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:30:48.589 20:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:48.589 20:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:30:48.589 20:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:48.589 20:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:48.589 20:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:48.589 20:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:48.589 20:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:48.589 20:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:48.589 20:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:48.589 20:22:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:30:48.589 20:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:48.589 20:22:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:48.589 20:22:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:48.589 20:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=204983 00:30:48.589 20:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 204983 00:30:48.589 20:22:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 204983 ']' 00:30:48.589 20:22:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:48.589 20:22:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:48.589 20:22:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:48.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:48.589 20:22:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:48.589 20:22:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:48.590 20:22:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:30:48.590 [2024-05-15 20:22:40.321594] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
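The two ping checks above complete the back-to-back topology that nvmf_tcp_init builds before nvmf_tgt is launched inside the namespace: the target-side port is moved into cvl_0_0_ns_spdk with 10.0.0.2, while the initiator-side port keeps 10.0.0.1 in the default namespace. A condensed sketch of that setup, using the interface names and addresses from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                      # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator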
00:30:48.590 [2024-05-15 20:22:40.321650] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:48.590 EAL: No free 2048 kB hugepages reported on node 1 00:30:48.590 [2024-05-15 20:22:40.414895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:48.590 [2024-05-15 20:22:40.509111] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:48.590 [2024-05-15 20:22:40.509168] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:48.590 [2024-05-15 20:22:40.509183] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:48.590 [2024-05-15 20:22:40.509190] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:48.590 [2024-05-15 20:22:40.509195] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:48.590 [2024-05-15 20:22:40.509228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:48.851 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:48.851 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:30:48.851 20:22:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:48.851 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:48.851 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:48.851 20:22:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:48.851 20:22:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:48.851 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.851 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:48.851 [2024-05-15 20:22:41.245928] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:48.851 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.851 20:22:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:30:48.851 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.851 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:48.851 null0 00:30:48.851 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.851 20:22:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:30:48.851 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.851 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:48.851 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.851 20:22:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:30:48.851 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.851 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:48.851 20:22:41 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.851 20:22:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a4e904bceeb54985a7b2731f57d9583e 00:30:48.851 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.851 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:48.851 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.851 20:22:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:48.851 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.851 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:48.851 [2024-05-15 20:22:41.285973] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:48.851 [2024-05-15 20:22:41.286166] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:48.851 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.851 20:22:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:30:48.851 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.851 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:49.112 nvme0n1 00:30:49.112 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.112 20:22:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:49.112 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.112 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:49.112 [ 00:30:49.112 { 00:30:49.112 "name": "nvme0n1", 00:30:49.112 "aliases": [ 00:30:49.112 "a4e904bc-eeb5-4985-a7b2-731f57d9583e" 00:30:49.112 ], 00:30:49.112 "product_name": "NVMe disk", 00:30:49.112 "block_size": 512, 00:30:49.112 "num_blocks": 2097152, 00:30:49.112 "uuid": "a4e904bc-eeb5-4985-a7b2-731f57d9583e", 00:30:49.112 "assigned_rate_limits": { 00:30:49.112 "rw_ios_per_sec": 0, 00:30:49.112 "rw_mbytes_per_sec": 0, 00:30:49.112 "r_mbytes_per_sec": 0, 00:30:49.112 "w_mbytes_per_sec": 0 00:30:49.112 }, 00:30:49.112 "claimed": false, 00:30:49.112 "zoned": false, 00:30:49.112 "supported_io_types": { 00:30:49.112 "read": true, 00:30:49.112 "write": true, 00:30:49.112 "unmap": false, 00:30:49.112 "write_zeroes": true, 00:30:49.112 "flush": true, 00:30:49.112 "reset": true, 00:30:49.112 "compare": true, 00:30:49.112 "compare_and_write": true, 00:30:49.112 "abort": true, 00:30:49.112 "nvme_admin": true, 00:30:49.112 "nvme_io": true 00:30:49.112 }, 00:30:49.112 "memory_domains": [ 00:30:49.112 { 00:30:49.112 "dma_device_id": "system", 00:30:49.112 "dma_device_type": 1 00:30:49.112 } 00:30:49.112 ], 00:30:49.112 "driver_specific": { 00:30:49.112 "nvme": [ 00:30:49.112 { 00:30:49.112 "trid": { 00:30:49.112 "trtype": "TCP", 00:30:49.112 "adrfam": "IPv4", 00:30:49.112 "traddr": "10.0.0.2", 00:30:49.112 "trsvcid": "4420", 00:30:49.112 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:49.112 }, 
00:30:49.112 "ctrlr_data": { 00:30:49.112 "cntlid": 1, 00:30:49.112 "vendor_id": "0x8086", 00:30:49.112 "model_number": "SPDK bdev Controller", 00:30:49.112 "serial_number": "00000000000000000000", 00:30:49.112 "firmware_revision": "24.05", 00:30:49.112 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:49.112 "oacs": { 00:30:49.112 "security": 0, 00:30:49.112 "format": 0, 00:30:49.112 "firmware": 0, 00:30:49.112 "ns_manage": 0 00:30:49.112 }, 00:30:49.112 "multi_ctrlr": true, 00:30:49.112 "ana_reporting": false 00:30:49.112 }, 00:30:49.112 "vs": { 00:30:49.112 "nvme_version": "1.3" 00:30:49.112 }, 00:30:49.112 "ns_data": { 00:30:49.112 "id": 1, 00:30:49.112 "can_share": true 00:30:49.112 } 00:30:49.112 } 00:30:49.112 ], 00:30:49.112 "mp_policy": "active_passive" 00:30:49.112 } 00:30:49.112 } 00:30:49.112 ] 00:30:49.112 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.112 20:22:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:30:49.112 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.112 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:49.112 [2024-05-15 20:22:41.542668] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:49.112 [2024-05-15 20:22:41.542728] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f93ca0 (9): Bad file descriptor 00:30:49.373 [2024-05-15 20:22:41.674407] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:49.373 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.373 20:22:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:49.373 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.373 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:49.373 [ 00:30:49.373 { 00:30:49.373 "name": "nvme0n1", 00:30:49.373 "aliases": [ 00:30:49.373 "a4e904bc-eeb5-4985-a7b2-731f57d9583e" 00:30:49.373 ], 00:30:49.373 "product_name": "NVMe disk", 00:30:49.373 "block_size": 512, 00:30:49.373 "num_blocks": 2097152, 00:30:49.373 "uuid": "a4e904bc-eeb5-4985-a7b2-731f57d9583e", 00:30:49.373 "assigned_rate_limits": { 00:30:49.373 "rw_ios_per_sec": 0, 00:30:49.373 "rw_mbytes_per_sec": 0, 00:30:49.373 "r_mbytes_per_sec": 0, 00:30:49.373 "w_mbytes_per_sec": 0 00:30:49.373 }, 00:30:49.373 "claimed": false, 00:30:49.373 "zoned": false, 00:30:49.373 "supported_io_types": { 00:30:49.373 "read": true, 00:30:49.373 "write": true, 00:30:49.373 "unmap": false, 00:30:49.373 "write_zeroes": true, 00:30:49.373 "flush": true, 00:30:49.373 "reset": true, 00:30:49.373 "compare": true, 00:30:49.373 "compare_and_write": true, 00:30:49.373 "abort": true, 00:30:49.373 "nvme_admin": true, 00:30:49.373 "nvme_io": true 00:30:49.373 }, 00:30:49.373 "memory_domains": [ 00:30:49.373 { 00:30:49.373 "dma_device_id": "system", 00:30:49.373 "dma_device_type": 1 00:30:49.373 } 00:30:49.373 ], 00:30:49.373 "driver_specific": { 00:30:49.373 "nvme": [ 00:30:49.373 { 00:30:49.373 "trid": { 00:30:49.373 "trtype": "TCP", 00:30:49.373 "adrfam": "IPv4", 00:30:49.373 "traddr": "10.0.0.2", 00:30:49.373 "trsvcid": "4420", 00:30:49.373 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:49.373 }, 00:30:49.373 "ctrlr_data": { 00:30:49.373 "cntlid": 2, 00:30:49.373 
"vendor_id": "0x8086", 00:30:49.373 "model_number": "SPDK bdev Controller", 00:30:49.373 "serial_number": "00000000000000000000", 00:30:49.373 "firmware_revision": "24.05", 00:30:49.373 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:49.373 "oacs": { 00:30:49.373 "security": 0, 00:30:49.373 "format": 0, 00:30:49.373 "firmware": 0, 00:30:49.373 "ns_manage": 0 00:30:49.373 }, 00:30:49.373 "multi_ctrlr": true, 00:30:49.373 "ana_reporting": false 00:30:49.373 }, 00:30:49.373 "vs": { 00:30:49.373 "nvme_version": "1.3" 00:30:49.373 }, 00:30:49.373 "ns_data": { 00:30:49.373 "id": 1, 00:30:49.373 "can_share": true 00:30:49.373 } 00:30:49.373 } 00:30:49.373 ], 00:30:49.373 "mp_policy": "active_passive" 00:30:49.373 } 00:30:49.373 } 00:30:49.373 ] 00:30:49.373 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.373 20:22:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:49.373 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.373 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:49.373 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.373 20:22:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:30:49.373 20:22:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.bAPmmfQ4T8 00:30:49.373 20:22:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:49.373 20:22:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.bAPmmfQ4T8 00:30:49.374 20:22:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:30:49.374 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.374 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:49.374 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.374 20:22:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:30:49.374 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.374 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:49.374 [2024-05-15 20:22:41.735256] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:49.374 [2024-05-15 20:22:41.735392] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:49.374 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.374 20:22:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bAPmmfQ4T8 00:30:49.374 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.374 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:49.374 [2024-05-15 20:22:41.743270] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:49.374 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.374 20:22:41 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bAPmmfQ4T8 00:30:49.374 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.374 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:49.374 [2024-05-15 20:22:41.751295] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:49.374 [2024-05-15 20:22:41.751337] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:30:49.374 nvme0n1 00:30:49.374 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.374 20:22:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:49.374 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.374 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:49.374 [ 00:30:49.374 { 00:30:49.374 "name": "nvme0n1", 00:30:49.374 "aliases": [ 00:30:49.374 "a4e904bc-eeb5-4985-a7b2-731f57d9583e" 00:30:49.374 ], 00:30:49.374 "product_name": "NVMe disk", 00:30:49.374 "block_size": 512, 00:30:49.374 "num_blocks": 2097152, 00:30:49.374 "uuid": "a4e904bc-eeb5-4985-a7b2-731f57d9583e", 00:30:49.374 "assigned_rate_limits": { 00:30:49.374 "rw_ios_per_sec": 0, 00:30:49.374 "rw_mbytes_per_sec": 0, 00:30:49.374 "r_mbytes_per_sec": 0, 00:30:49.374 "w_mbytes_per_sec": 0 00:30:49.374 }, 00:30:49.374 "claimed": false, 00:30:49.374 "zoned": false, 00:30:49.374 "supported_io_types": { 00:30:49.374 "read": true, 00:30:49.374 "write": true, 00:30:49.374 "unmap": false, 00:30:49.374 "write_zeroes": true, 00:30:49.374 "flush": true, 00:30:49.374 "reset": true, 00:30:49.374 "compare": true, 00:30:49.374 "compare_and_write": true, 00:30:49.374 "abort": true, 00:30:49.374 "nvme_admin": true, 00:30:49.374 "nvme_io": true 00:30:49.374 }, 00:30:49.374 "memory_domains": [ 00:30:49.374 { 00:30:49.374 "dma_device_id": "system", 00:30:49.374 "dma_device_type": 1 00:30:49.374 } 00:30:49.374 ], 00:30:49.374 "driver_specific": { 00:30:49.374 "nvme": [ 00:30:49.374 { 00:30:49.374 "trid": { 00:30:49.374 "trtype": "TCP", 00:30:49.374 "adrfam": "IPv4", 00:30:49.374 "traddr": "10.0.0.2", 00:30:49.374 "trsvcid": "4421", 00:30:49.374 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:49.374 }, 00:30:49.374 "ctrlr_data": { 00:30:49.374 "cntlid": 3, 00:30:49.374 "vendor_id": "0x8086", 00:30:49.374 "model_number": "SPDK bdev Controller", 00:30:49.374 "serial_number": "00000000000000000000", 00:30:49.374 "firmware_revision": "24.05", 00:30:49.374 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:49.374 "oacs": { 00:30:49.374 "security": 0, 00:30:49.374 "format": 0, 00:30:49.374 "firmware": 0, 00:30:49.374 "ns_manage": 0 00:30:49.374 }, 00:30:49.374 "multi_ctrlr": true, 00:30:49.374 "ana_reporting": false 00:30:49.374 }, 00:30:49.374 "vs": { 00:30:49.374 "nvme_version": "1.3" 00:30:49.374 }, 00:30:49.374 "ns_data": { 00:30:49.374 "id": 1, 00:30:49.374 "can_share": true 00:30:49.374 } 00:30:49.374 } 00:30:49.374 ], 00:30:49.374 "mp_policy": "active_passive" 00:30:49.374 } 00:30:49.374 } 00:30:49.374 ] 00:30:49.374 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.374 20:22:41 nvmf_tcp.nvmf_async_init -- 
host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:49.374 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.374 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:49.374 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.374 20:22:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.bAPmmfQ4T8 00:30:49.374 20:22:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:30:49.374 20:22:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:30:49.374 20:22:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:49.374 20:22:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:30:49.374 20:22:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:49.374 20:22:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:30:49.374 20:22:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:49.374 20:22:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:49.636 rmmod nvme_tcp 00:30:49.636 rmmod nvme_fabrics 00:30:49.636 rmmod nvme_keyring 00:30:49.636 20:22:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:49.636 20:22:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:30:49.636 20:22:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:30:49.636 20:22:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 204983 ']' 00:30:49.636 20:22:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 204983 00:30:49.636 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 204983 ']' 00:30:49.636 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 204983 00:30:49.636 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:30:49.636 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:49.636 20:22:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 204983 00:30:49.636 20:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:49.636 20:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:49.636 20:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 204983' 00:30:49.636 killing process with pid 204983 00:30:49.636 20:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 204983 00:30:49.636 [2024-05-15 20:22:42.003146] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:30:49.636 [2024-05-15 20:22:42.003175] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:49.636 [2024-05-15 20:22:42.003183] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:30:49.636 20:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 204983 00:30:49.636 20:22:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:49.636 20:22:42 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:49.636 20:22:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:49.636 20:22:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:49.636 20:22:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:49.636 20:22:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:49.636 20:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:49.636 20:22:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:52.180 20:22:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:52.180 00:30:52.180 real 0m11.494s 00:30:52.180 user 0m4.049s 00:30:52.180 sys 0m5.958s 00:30:52.180 20:22:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:52.180 20:22:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:52.180 ************************************ 00:30:52.180 END TEST nvmf_async_init 00:30:52.180 ************************************ 00:30:52.180 20:22:44 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:52.180 20:22:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:52.180 20:22:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:52.180 20:22:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:52.180 ************************************ 00:30:52.180 START TEST dma 00:30:52.180 ************************************ 00:30:52.180 20:22:44 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:52.180 * Looking for test storage... 
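The TLS portion of nvmf_async_init that just finished follows a fixed recipe: write an interchange-format PSK to a mode-0600 file, open a --secure-channel listener on a second port, register the host NQN against that PSK, then attach from the initiator side with the same key. A compressed sketch of those steps; the temp-file path is reused from this run purely as a placeholder (the test itself gets it from mktemp):

  key=/tmp/tmp.bAPmmfQ4T8                                  # placeholder path for the PSK file
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key"
  chmod 0600 "$key"
  ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key"
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key"
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0       # cleanup
  rm -f "$key"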
00:30:52.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:52.180 20:22:44 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:52.180 20:22:44 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:30:52.180 20:22:44 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:52.180 20:22:44 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:52.180 20:22:44 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:52.180 20:22:44 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:52.180 20:22:44 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:52.180 20:22:44 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:52.180 20:22:44 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:52.180 20:22:44 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:52.180 20:22:44 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:52.180 20:22:44 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:52.180 20:22:44 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:52.180 20:22:44 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:52.181 20:22:44 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:52.181 20:22:44 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:52.181 20:22:44 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:52.181 20:22:44 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:52.181 20:22:44 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:52.181 20:22:44 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:52.181 20:22:44 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:52.181 20:22:44 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:52.181 20:22:44 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.181 20:22:44 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.181 20:22:44 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.181 20:22:44 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:30:52.181 20:22:44 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.181 20:22:44 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:30:52.181 20:22:44 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:52.181 20:22:44 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:52.181 20:22:44 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:52.181 20:22:44 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:52.181 20:22:44 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:52.181 20:22:44 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:52.181 20:22:44 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:52.181 20:22:44 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:52.181 20:22:44 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:30:52.181 20:22:44 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:30:52.181 00:30:52.181 real 0m0.125s 00:30:52.181 user 0m0.066s 00:30:52.181 sys 0m0.068s 00:30:52.181 20:22:44 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:52.181 20:22:44 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:30:52.181 ************************************ 00:30:52.181 END TEST dma 00:30:52.181 ************************************ 00:30:52.181 20:22:44 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:52.181 20:22:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:52.181 20:22:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:52.181 20:22:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:52.181 ************************************ 00:30:52.181 START TEST nvmf_identify 00:30:52.181 ************************************ 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:52.181 * Looking for test storage... 
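host/dma.sh bails out immediately for --transport=tcp (the test only applies to RDMA), so run_test records a sub-second runtime and moves straight on to nvmf_identify. As a rough, purely illustrative model of the wrapper behaviour behind the START/END banners and the real/user/sys timings in this log (not SPDK's actual run_test implementation):

  run_test() {                       # illustrative sketch only
      local name=$1; shift
      echo "START TEST $name"
      time "$@"                      # runs the test script; bash prints the real/user/sys lines
      local rc=$?
      echo "END TEST $name"
      return $rc
  }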
00:30:52.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:52.181 20:22:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:52.182 20:22:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:52.182 20:22:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:52.182 20:22:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:30:52.182 20:22:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:00.323 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:00.323 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:00.323 Found net devices under 0000:31:00.0: cvl_0_0 00:31:00.323 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:00.324 Found net devices under 0000:31:00.1: cvl_0_1 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:00.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:00.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.595 ms 00:31:00.324 00:31:00.324 --- 10.0.0.2 ping statistics --- 00:31:00.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:00.324 rtt min/avg/max/mdev = 0.595/0.595/0.595/0.000 ms 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:00.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:00.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.355 ms 00:31:00.324 00:31:00.324 --- 10.0.0.1 ping statistics --- 00:31:00.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:00.324 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=209974 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 209974 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 209974 ']' 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:00.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:00.324 20:22:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:00.324 [2024-05-15 20:22:52.716126] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:31:00.324 [2024-05-15 20:22:52.716213] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:00.324 EAL: No free 2048 kB hugepages reported on node 1 00:31:00.324 [2024-05-15 20:22:52.814001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:00.587 [2024-05-15 20:22:52.912647] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:00.587 [2024-05-15 20:22:52.912707] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
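At this point nvmftestinit has carved the two detected cvl_0_* ports into a small two-endpoint topology: the target side (cvl_0_0, 10.0.0.2) lives in the cvl_0_0_ns_spdk network namespace, the initiator side (cvl_0_1, 10.0.0.1) stays in the default namespace, and TCP port 4420 is opened for NVMe/TCP. A minimal sketch of that setup, mirroring the ip/iptables commands in the trace above (interface and namespace names are specific to this run, and the commands need root):

# move the target-side port into its own namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow NVMe/TCP traffic to the default port and verify reachability both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1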
00:31:00.587 [2024-05-15 20:22:52.912716] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:00.587 [2024-05-15 20:22:52.912723] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:00.587 [2024-05-15 20:22:52.912729] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:00.587 [2024-05-15 20:22:52.912858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:00.587 [2024-05-15 20:22:52.912987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:00.587 [2024-05-15 20:22:52.913126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:00.587 [2024-05-15 20:22:52.913127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:01.160 20:22:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:01.160 20:22:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:31:01.160 20:22:53 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:01.160 20:22:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.160 20:22:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:01.160 [2024-05-15 20:22:53.601943] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:01.160 20:22:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.160 20:22:53 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:31:01.160 20:22:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:01.160 20:22:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:01.160 20:22:53 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:01.160 20:22:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.160 20:22:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:01.423 Malloc0 00:31:01.423 20:22:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.423 20:22:53 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:01.423 20:22:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.423 20:22:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:01.423 20:22:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.423 20:22:53 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:31:01.423 20:22:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.423 20:22:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:01.423 20:22:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.423 20:22:53 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:01.423 20:22:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.423 20:22:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:01.423 [2024-05-15 20:22:53.701185] 
nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:01.423 [2024-05-15 20:22:53.701405] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:01.423 20:22:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.423 20:22:53 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:01.423 20:22:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.423 20:22:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:01.423 20:22:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.423 20:22:53 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:31:01.423 20:22:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.423 20:22:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:01.423 [ 00:31:01.423 { 00:31:01.423 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:01.423 "subtype": "Discovery", 00:31:01.423 "listen_addresses": [ 00:31:01.423 { 00:31:01.423 "trtype": "TCP", 00:31:01.423 "adrfam": "IPv4", 00:31:01.423 "traddr": "10.0.0.2", 00:31:01.423 "trsvcid": "4420" 00:31:01.423 } 00:31:01.423 ], 00:31:01.423 "allow_any_host": true, 00:31:01.423 "hosts": [] 00:31:01.423 }, 00:31:01.423 { 00:31:01.423 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:01.423 "subtype": "NVMe", 00:31:01.423 "listen_addresses": [ 00:31:01.423 { 00:31:01.423 "trtype": "TCP", 00:31:01.423 "adrfam": "IPv4", 00:31:01.423 "traddr": "10.0.0.2", 00:31:01.423 "trsvcid": "4420" 00:31:01.423 } 00:31:01.423 ], 00:31:01.423 "allow_any_host": true, 00:31:01.423 "hosts": [], 00:31:01.423 "serial_number": "SPDK00000000000001", 00:31:01.423 "model_number": "SPDK bdev Controller", 00:31:01.423 "max_namespaces": 32, 00:31:01.423 "min_cntlid": 1, 00:31:01.423 "max_cntlid": 65519, 00:31:01.423 "namespaces": [ 00:31:01.423 { 00:31:01.423 "nsid": 1, 00:31:01.423 "bdev_name": "Malloc0", 00:31:01.423 "name": "Malloc0", 00:31:01.423 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:31:01.423 "eui64": "ABCDEF0123456789", 00:31:01.423 "uuid": "1069e988-a77a-4589-93e5-1f4bf0b7c949" 00:31:01.423 } 00:31:01.423 ] 00:31:01.423 } 00:31:01.423 ] 00:31:01.423 20:22:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.423 20:22:53 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:31:01.423 [2024-05-15 20:22:53.762805] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
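The rpc_cmd calls traced above are what configure the target for this test; they are forwarded to SPDK's scripts/rpc.py client, which talks to the nvmf_tgt over the /var/tmp/spdk.sock socket mentioned by waitforlisten earlier. A minimal stand-alone sketch of the same sequence (flags copied verbatim from the trace; run from the SPDK source tree while that nvmf_tgt is still up):

RPC=./scripts/rpc.py                                   # default socket: /var/tmp/spdk.sock
$RPC nvmf_create_transport -t tcp -o -u 8192           # create the TCP transport
$RPC bdev_malloc_create 64 512 -b Malloc0              # 64 MB RAM-backed bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
     --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_get_subsystems    # should report the discovery subsystem plus cnode1/Malloc0, as in the JSON above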
00:31:01.424 [2024-05-15 20:22:53.762847] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid210324 ] 00:31:01.424 EAL: No free 2048 kB hugepages reported on node 1 00:31:01.424 [2024-05-15 20:22:53.793965] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:31:01.424 [2024-05-15 20:22:53.794013] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:01.424 [2024-05-15 20:22:53.794018] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:01.424 [2024-05-15 20:22:53.794029] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:01.424 [2024-05-15 20:22:53.794036] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:01.424 [2024-05-15 20:22:53.797345] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:31:01.424 [2024-05-15 20:22:53.797376] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e94c30 0 00:31:01.424 [2024-05-15 20:22:53.805318] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:01.424 [2024-05-15 20:22:53.805330] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:01.424 [2024-05-15 20:22:53.805336] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:01.424 [2024-05-15 20:22:53.805340] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:01.424 [2024-05-15 20:22:53.805377] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.424 [2024-05-15 20:22:53.805383] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.424 [2024-05-15 20:22:53.805387] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e94c30) 00:31:01.424 [2024-05-15 20:22:53.805399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:01.424 [2024-05-15 20:22:53.805415] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efc980, cid 0, qid 0 00:31:01.424 [2024-05-15 20:22:53.813324] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.424 [2024-05-15 20:22:53.813333] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.424 [2024-05-15 20:22:53.813337] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.424 [2024-05-15 20:22:53.813341] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efc980) on tqpair=0x1e94c30 00:31:01.424 [2024-05-15 20:22:53.813354] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:01.424 [2024-05-15 20:22:53.813360] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:31:01.424 [2024-05-15 20:22:53.813365] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:31:01.424 [2024-05-15 20:22:53.813376] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.424 [2024-05-15 20:22:53.813380] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:31:01.424 [2024-05-15 20:22:53.813384] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e94c30) 00:31:01.424 [2024-05-15 20:22:53.813391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.424 [2024-05-15 20:22:53.813403] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efc980, cid 0, qid 0 00:31:01.424 [2024-05-15 20:22:53.813634] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.424 [2024-05-15 20:22:53.813641] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.424 [2024-05-15 20:22:53.813644] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.424 [2024-05-15 20:22:53.813648] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efc980) on tqpair=0x1e94c30 00:31:01.424 [2024-05-15 20:22:53.813654] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:31:01.424 [2024-05-15 20:22:53.813661] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:31:01.424 [2024-05-15 20:22:53.813667] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.424 [2024-05-15 20:22:53.813671] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.424 [2024-05-15 20:22:53.813674] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e94c30) 00:31:01.424 [2024-05-15 20:22:53.813681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.424 [2024-05-15 20:22:53.813691] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efc980, cid 0, qid 0 00:31:01.424 [2024-05-15 20:22:53.813912] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.424 [2024-05-15 20:22:53.813918] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.424 [2024-05-15 20:22:53.813922] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.424 [2024-05-15 20:22:53.813925] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efc980) on tqpair=0x1e94c30 00:31:01.424 [2024-05-15 20:22:53.813931] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:31:01.424 [2024-05-15 20:22:53.813939] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:31:01.424 [2024-05-15 20:22:53.813948] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.424 [2024-05-15 20:22:53.813952] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.424 [2024-05-15 20:22:53.813956] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e94c30) 00:31:01.424 [2024-05-15 20:22:53.813962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.424 [2024-05-15 20:22:53.813972] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efc980, cid 0, qid 0 00:31:01.424 [2024-05-15 20:22:53.814190] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.424 [2024-05-15 
20:22:53.814197] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.424 [2024-05-15 20:22:53.814200] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.424 [2024-05-15 20:22:53.814204] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efc980) on tqpair=0x1e94c30 00:31:01.424 [2024-05-15 20:22:53.814209] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:01.424 [2024-05-15 20:22:53.814219] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.424 [2024-05-15 20:22:53.814222] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.424 [2024-05-15 20:22:53.814226] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e94c30) 00:31:01.424 [2024-05-15 20:22:53.814232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.424 [2024-05-15 20:22:53.814242] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efc980, cid 0, qid 0 00:31:01.424 [2024-05-15 20:22:53.814447] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.424 [2024-05-15 20:22:53.814453] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.424 [2024-05-15 20:22:53.814457] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.424 [2024-05-15 20:22:53.814461] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efc980) on tqpair=0x1e94c30 00:31:01.424 [2024-05-15 20:22:53.814466] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:31:01.424 [2024-05-15 20:22:53.814470] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:31:01.424 [2024-05-15 20:22:53.814478] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:01.424 [2024-05-15 20:22:53.814583] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:31:01.424 [2024-05-15 20:22:53.814587] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:01.424 [2024-05-15 20:22:53.814596] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.424 [2024-05-15 20:22:53.814600] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.424 [2024-05-15 20:22:53.814604] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e94c30) 00:31:01.424 [2024-05-15 20:22:53.814610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.424 [2024-05-15 20:22:53.814620] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efc980, cid 0, qid 0 00:31:01.424 [2024-05-15 20:22:53.814836] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.424 [2024-05-15 20:22:53.814842] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.424 [2024-05-15 20:22:53.814845] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:31:01.424 [2024-05-15 20:22:53.814849] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efc980) on tqpair=0x1e94c30 00:31:01.424 [2024-05-15 20:22:53.814857] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:01.424 [2024-05-15 20:22:53.814866] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.424 [2024-05-15 20:22:53.814870] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.424 [2024-05-15 20:22:53.814873] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e94c30) 00:31:01.424 [2024-05-15 20:22:53.814879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.424 [2024-05-15 20:22:53.814889] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efc980, cid 0, qid 0 00:31:01.424 [2024-05-15 20:22:53.815104] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.424 [2024-05-15 20:22:53.815110] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.424 [2024-05-15 20:22:53.815114] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.424 [2024-05-15 20:22:53.815118] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efc980) on tqpair=0x1e94c30 00:31:01.424 [2024-05-15 20:22:53.815123] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:01.424 [2024-05-15 20:22:53.815127] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:31:01.424 [2024-05-15 20:22:53.815135] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:31:01.424 [2024-05-15 20:22:53.815143] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:31:01.424 [2024-05-15 20:22:53.815151] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.424 [2024-05-15 20:22:53.815155] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e94c30) 00:31:01.424 [2024-05-15 20:22:53.815162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.424 [2024-05-15 20:22:53.815171] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efc980, cid 0, qid 0 00:31:01.424 [2024-05-15 20:22:53.815414] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:01.424 [2024-05-15 20:22:53.815421] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:01.424 [2024-05-15 20:22:53.815424] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:01.424 [2024-05-15 20:22:53.815428] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e94c30): datao=0, datal=4096, cccid=0 00:31:01.425 [2024-05-15 20:22:53.815433] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1efc980) on tqpair(0x1e94c30): expected_datao=0, payload_size=4096 00:31:01.425 [2024-05-15 20:22:53.815437] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.425 [2024-05-15 20:22:53.815509] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:01.425 [2024-05-15 20:22:53.815513] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:01.425 [2024-05-15 20:22:53.856511] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.425 [2024-05-15 20:22:53.856520] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.425 [2024-05-15 20:22:53.856524] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.425 [2024-05-15 20:22:53.856527] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efc980) on tqpair=0x1e94c30 00:31:01.425 [2024-05-15 20:22:53.856536] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:31:01.425 [2024-05-15 20:22:53.856542] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:31:01.425 [2024-05-15 20:22:53.856546] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:31:01.425 [2024-05-15 20:22:53.856554] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:31:01.425 [2024-05-15 20:22:53.856558] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:31:01.425 [2024-05-15 20:22:53.856563] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:31:01.425 [2024-05-15 20:22:53.856574] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:31:01.425 [2024-05-15 20:22:53.856583] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.425 [2024-05-15 20:22:53.856587] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.425 [2024-05-15 20:22:53.856591] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e94c30) 00:31:01.425 [2024-05-15 20:22:53.856598] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:01.425 [2024-05-15 20:22:53.856610] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efc980, cid 0, qid 0 00:31:01.425 [2024-05-15 20:22:53.856790] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.425 [2024-05-15 20:22:53.856796] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.425 [2024-05-15 20:22:53.856799] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.425 [2024-05-15 20:22:53.856803] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efc980) on tqpair=0x1e94c30 00:31:01.425 [2024-05-15 20:22:53.856811] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.425 [2024-05-15 20:22:53.856815] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.425 [2024-05-15 20:22:53.856818] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e94c30) 00:31:01.425 [2024-05-15 20:22:53.856824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:31:01.425 [2024-05-15 20:22:53.856830] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.425 [2024-05-15 20:22:53.856833] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.425 [2024-05-15 20:22:53.856837] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1e94c30) 00:31:01.425 [2024-05-15 20:22:53.856842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:01.425 [2024-05-15 20:22:53.856848] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.425 [2024-05-15 20:22:53.856852] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.425 [2024-05-15 20:22:53.856855] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e94c30) 00:31:01.425 [2024-05-15 20:22:53.856861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:01.425 [2024-05-15 20:22:53.856866] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.425 [2024-05-15 20:22:53.856870] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.425 [2024-05-15 20:22:53.856873] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e94c30) 00:31:01.425 [2024-05-15 20:22:53.856879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:01.425 [2024-05-15 20:22:53.856883] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:31:01.425 [2024-05-15 20:22:53.856893] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:01.425 [2024-05-15 20:22:53.856900] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.425 [2024-05-15 20:22:53.856903] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e94c30) 00:31:01.425 [2024-05-15 20:22:53.856912] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.425 [2024-05-15 20:22:53.856924] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efc980, cid 0, qid 0 00:31:01.425 [2024-05-15 20:22:53.856929] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efcae0, cid 1, qid 0 00:31:01.425 [2024-05-15 20:22:53.856933] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efcc40, cid 2, qid 0 00:31:01.425 [2024-05-15 20:22:53.856938] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efcda0, cid 3, qid 0 00:31:01.425 [2024-05-15 20:22:53.856943] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efcf00, cid 4, qid 0 00:31:01.425 [2024-05-15 20:22:53.857248] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.425 [2024-05-15 20:22:53.857254] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.425 [2024-05-15 20:22:53.857258] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.425 [2024-05-15 20:22:53.857261] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efcf00) on tqpair=0x1e94c30 
00:31:01.425 [2024-05-15 20:22:53.857267] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:31:01.425 [2024-05-15 20:22:53.857271] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:31:01.425 [2024-05-15 20:22:53.857282] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.425 [2024-05-15 20:22:53.857286] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e94c30) 00:31:01.425 [2024-05-15 20:22:53.857292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.425 [2024-05-15 20:22:53.857302] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efcf00, cid 4, qid 0 00:31:01.425 [2024-05-15 20:22:53.861320] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:01.425 [2024-05-15 20:22:53.861327] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:01.425 [2024-05-15 20:22:53.861331] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:01.425 [2024-05-15 20:22:53.861334] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e94c30): datao=0, datal=4096, cccid=4 00:31:01.425 [2024-05-15 20:22:53.861339] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1efcf00) on tqpair(0x1e94c30): expected_datao=0, payload_size=4096 00:31:01.425 [2024-05-15 20:22:53.861343] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.425 [2024-05-15 20:22:53.861349] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:01.425 [2024-05-15 20:22:53.861353] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:01.425 [2024-05-15 20:22:53.861359] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.425 [2024-05-15 20:22:53.861365] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.425 [2024-05-15 20:22:53.861368] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.425 [2024-05-15 20:22:53.861372] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efcf00) on tqpair=0x1e94c30 00:31:01.425 [2024-05-15 20:22:53.861384] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:31:01.425 [2024-05-15 20:22:53.861407] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.425 [2024-05-15 20:22:53.861412] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e94c30) 00:31:01.425 [2024-05-15 20:22:53.861418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.425 [2024-05-15 20:22:53.861425] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.425 [2024-05-15 20:22:53.861428] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.425 [2024-05-15 20:22:53.861434] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e94c30) 00:31:01.425 [2024-05-15 20:22:53.861440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:01.425 [2024-05-15 20:22:53.861454] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efcf00, cid 4, qid 0 00:31:01.425 [2024-05-15 20:22:53.861459] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efd060, cid 5, qid 0 00:31:01.425 [2024-05-15 20:22:53.861718] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:01.425 [2024-05-15 20:22:53.861724] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:01.425 [2024-05-15 20:22:53.861728] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:01.425 [2024-05-15 20:22:53.861731] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e94c30): datao=0, datal=1024, cccid=4 00:31:01.425 [2024-05-15 20:22:53.861736] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1efcf00) on tqpair(0x1e94c30): expected_datao=0, payload_size=1024 00:31:01.425 [2024-05-15 20:22:53.861740] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.425 [2024-05-15 20:22:53.861746] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:01.425 [2024-05-15 20:22:53.861750] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:01.425 [2024-05-15 20:22:53.861755] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.425 [2024-05-15 20:22:53.861761] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.425 [2024-05-15 20:22:53.861764] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.425 [2024-05-15 20:22:53.861768] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efd060) on tqpair=0x1e94c30 00:31:01.425 [2024-05-15 20:22:53.902526] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.425 [2024-05-15 20:22:53.902536] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.425 [2024-05-15 20:22:53.902540] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.425 [2024-05-15 20:22:53.902544] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efcf00) on tqpair=0x1e94c30 00:31:01.425 [2024-05-15 20:22:53.902556] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.425 [2024-05-15 20:22:53.902560] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e94c30) 00:31:01.425 [2024-05-15 20:22:53.902567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.425 [2024-05-15 20:22:53.902581] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efcf00, cid 4, qid 0 00:31:01.425 [2024-05-15 20:22:53.902825] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:01.425 [2024-05-15 20:22:53.902832] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:01.426 [2024-05-15 20:22:53.902835] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:01.426 [2024-05-15 20:22:53.902839] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e94c30): datao=0, datal=3072, cccid=4 00:31:01.426 [2024-05-15 20:22:53.902843] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1efcf00) on tqpair(0x1e94c30): expected_datao=0, payload_size=3072 00:31:01.426 [2024-05-15 20:22:53.902847] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.426 [2024-05-15 20:22:53.902884] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
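The admin-queue exchange traced above (FABRIC CONNECT, PROPERTY GET/SET on CC and CSTS, IDENTIFY, GET LOG PAGE) is what produces the controller and discovery report that follows. It can be reproduced outside the harness with the same identify example the test invokes, or cross-checked with nvme-cli, which is not part of this test; a sketch, assuming the target from this run is still listening on 10.0.0.2:4420:

# SPDK identify example, same invocation host/identify.sh uses above
./build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all

# optional cross-check with the kernel initiator (nvme-cli): fetch the same
# discovery log page; it should list the two records shown below
nvme discover -t tcp -a 10.0.0.2 -s 4420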
00:31:01.426 [2024-05-15 20:22:53.902888] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:01.691 [2024-05-15 20:22:53.943523] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.691 [2024-05-15 20:22:53.943534] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.691 [2024-05-15 20:22:53.943537] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.691 [2024-05-15 20:22:53.943541] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efcf00) on tqpair=0x1e94c30 00:31:01.691 [2024-05-15 20:22:53.943558] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.691 [2024-05-15 20:22:53.943562] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e94c30) 00:31:01.691 [2024-05-15 20:22:53.943569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.691 [2024-05-15 20:22:53.943583] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efcf00, cid 4, qid 0 00:31:01.691 [2024-05-15 20:22:53.943825] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:01.691 [2024-05-15 20:22:53.943831] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:01.691 [2024-05-15 20:22:53.943834] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:01.691 [2024-05-15 20:22:53.943838] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e94c30): datao=0, datal=8, cccid=4 00:31:01.691 [2024-05-15 20:22:53.943842] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1efcf00) on tqpair(0x1e94c30): expected_datao=0, payload_size=8 00:31:01.691 [2024-05-15 20:22:53.943846] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.691 [2024-05-15 20:22:53.943853] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:01.691 [2024-05-15 20:22:53.943856] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:01.691 [2024-05-15 20:22:53.988320] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.691 [2024-05-15 20:22:53.988328] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.691 [2024-05-15 20:22:53.988332] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.691 [2024-05-15 20:22:53.988336] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efcf00) on tqpair=0x1e94c30 00:31:01.691 ===================================================== 00:31:01.691 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:01.691 ===================================================== 00:31:01.691 Controller Capabilities/Features 00:31:01.691 ================================ 00:31:01.691 Vendor ID: 0000 00:31:01.691 Subsystem Vendor ID: 0000 00:31:01.691 Serial Number: .................... 00:31:01.692 Model Number: ........................................ 
00:31:01.692 Firmware Version: 24.05 00:31:01.692 Recommended Arb Burst: 0 00:31:01.692 IEEE OUI Identifier: 00 00 00 00:31:01.692 Multi-path I/O 00:31:01.692 May have multiple subsystem ports: No 00:31:01.692 May have multiple controllers: No 00:31:01.692 Associated with SR-IOV VF: No 00:31:01.692 Max Data Transfer Size: 131072 00:31:01.692 Max Number of Namespaces: 0 00:31:01.692 Max Number of I/O Queues: 1024 00:31:01.692 NVMe Specification Version (VS): 1.3 00:31:01.692 NVMe Specification Version (Identify): 1.3 00:31:01.692 Maximum Queue Entries: 128 00:31:01.692 Contiguous Queues Required: Yes 00:31:01.692 Arbitration Mechanisms Supported 00:31:01.692 Weighted Round Robin: Not Supported 00:31:01.692 Vendor Specific: Not Supported 00:31:01.692 Reset Timeout: 15000 ms 00:31:01.692 Doorbell Stride: 4 bytes 00:31:01.692 NVM Subsystem Reset: Not Supported 00:31:01.692 Command Sets Supported 00:31:01.692 NVM Command Set: Supported 00:31:01.692 Boot Partition: Not Supported 00:31:01.692 Memory Page Size Minimum: 4096 bytes 00:31:01.692 Memory Page Size Maximum: 4096 bytes 00:31:01.692 Persistent Memory Region: Not Supported 00:31:01.692 Optional Asynchronous Events Supported 00:31:01.692 Namespace Attribute Notices: Not Supported 00:31:01.692 Firmware Activation Notices: Not Supported 00:31:01.692 ANA Change Notices: Not Supported 00:31:01.692 PLE Aggregate Log Change Notices: Not Supported 00:31:01.692 LBA Status Info Alert Notices: Not Supported 00:31:01.692 EGE Aggregate Log Change Notices: Not Supported 00:31:01.692 Normal NVM Subsystem Shutdown event: Not Supported 00:31:01.692 Zone Descriptor Change Notices: Not Supported 00:31:01.692 Discovery Log Change Notices: Supported 00:31:01.692 Controller Attributes 00:31:01.692 128-bit Host Identifier: Not Supported 00:31:01.692 Non-Operational Permissive Mode: Not Supported 00:31:01.692 NVM Sets: Not Supported 00:31:01.692 Read Recovery Levels: Not Supported 00:31:01.692 Endurance Groups: Not Supported 00:31:01.692 Predictable Latency Mode: Not Supported 00:31:01.692 Traffic Based Keep ALive: Not Supported 00:31:01.692 Namespace Granularity: Not Supported 00:31:01.692 SQ Associations: Not Supported 00:31:01.692 UUID List: Not Supported 00:31:01.692 Multi-Domain Subsystem: Not Supported 00:31:01.692 Fixed Capacity Management: Not Supported 00:31:01.692 Variable Capacity Management: Not Supported 00:31:01.692 Delete Endurance Group: Not Supported 00:31:01.692 Delete NVM Set: Not Supported 00:31:01.692 Extended LBA Formats Supported: Not Supported 00:31:01.692 Flexible Data Placement Supported: Not Supported 00:31:01.692 00:31:01.692 Controller Memory Buffer Support 00:31:01.692 ================================ 00:31:01.692 Supported: No 00:31:01.692 00:31:01.692 Persistent Memory Region Support 00:31:01.692 ================================ 00:31:01.692 Supported: No 00:31:01.692 00:31:01.692 Admin Command Set Attributes 00:31:01.692 ============================ 00:31:01.692 Security Send/Receive: Not Supported 00:31:01.692 Format NVM: Not Supported 00:31:01.692 Firmware Activate/Download: Not Supported 00:31:01.692 Namespace Management: Not Supported 00:31:01.692 Device Self-Test: Not Supported 00:31:01.692 Directives: Not Supported 00:31:01.692 NVMe-MI: Not Supported 00:31:01.692 Virtualization Management: Not Supported 00:31:01.692 Doorbell Buffer Config: Not Supported 00:31:01.692 Get LBA Status Capability: Not Supported 00:31:01.692 Command & Feature Lockdown Capability: Not Supported 00:31:01.692 Abort Command Limit: 1 00:31:01.692 Async 
Event Request Limit: 4 00:31:01.692 Number of Firmware Slots: N/A 00:31:01.692 Firmware Slot 1 Read-Only: N/A 00:31:01.692 Firmware Activation Without Reset: N/A 00:31:01.692 Multiple Update Detection Support: N/A 00:31:01.692 Firmware Update Granularity: No Information Provided 00:31:01.692 Per-Namespace SMART Log: No 00:31:01.692 Asymmetric Namespace Access Log Page: Not Supported 00:31:01.692 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:01.692 Command Effects Log Page: Not Supported 00:31:01.692 Get Log Page Extended Data: Supported 00:31:01.692 Telemetry Log Pages: Not Supported 00:31:01.692 Persistent Event Log Pages: Not Supported 00:31:01.692 Supported Log Pages Log Page: May Support 00:31:01.692 Commands Supported & Effects Log Page: Not Supported 00:31:01.692 Feature Identifiers & Effects Log Page:May Support 00:31:01.692 NVMe-MI Commands & Effects Log Page: May Support 00:31:01.692 Data Area 4 for Telemetry Log: Not Supported 00:31:01.692 Error Log Page Entries Supported: 128 00:31:01.692 Keep Alive: Not Supported 00:31:01.692 00:31:01.692 NVM Command Set Attributes 00:31:01.692 ========================== 00:31:01.692 Submission Queue Entry Size 00:31:01.692 Max: 1 00:31:01.692 Min: 1 00:31:01.692 Completion Queue Entry Size 00:31:01.692 Max: 1 00:31:01.692 Min: 1 00:31:01.692 Number of Namespaces: 0 00:31:01.692 Compare Command: Not Supported 00:31:01.692 Write Uncorrectable Command: Not Supported 00:31:01.692 Dataset Management Command: Not Supported 00:31:01.692 Write Zeroes Command: Not Supported 00:31:01.692 Set Features Save Field: Not Supported 00:31:01.692 Reservations: Not Supported 00:31:01.692 Timestamp: Not Supported 00:31:01.692 Copy: Not Supported 00:31:01.692 Volatile Write Cache: Not Present 00:31:01.692 Atomic Write Unit (Normal): 1 00:31:01.692 Atomic Write Unit (PFail): 1 00:31:01.692 Atomic Compare & Write Unit: 1 00:31:01.692 Fused Compare & Write: Supported 00:31:01.692 Scatter-Gather List 00:31:01.692 SGL Command Set: Supported 00:31:01.692 SGL Keyed: Supported 00:31:01.692 SGL Bit Bucket Descriptor: Not Supported 00:31:01.692 SGL Metadata Pointer: Not Supported 00:31:01.692 Oversized SGL: Not Supported 00:31:01.692 SGL Metadata Address: Not Supported 00:31:01.692 SGL Offset: Supported 00:31:01.692 Transport SGL Data Block: Not Supported 00:31:01.692 Replay Protected Memory Block: Not Supported 00:31:01.692 00:31:01.692 Firmware Slot Information 00:31:01.692 ========================= 00:31:01.692 Active slot: 0 00:31:01.692 00:31:01.692 00:31:01.692 Error Log 00:31:01.692 ========= 00:31:01.692 00:31:01.692 Active Namespaces 00:31:01.692 ================= 00:31:01.692 Discovery Log Page 00:31:01.692 ================== 00:31:01.692 Generation Counter: 2 00:31:01.692 Number of Records: 2 00:31:01.692 Record Format: 0 00:31:01.692 00:31:01.692 Discovery Log Entry 0 00:31:01.692 ---------------------- 00:31:01.692 Transport Type: 3 (TCP) 00:31:01.692 Address Family: 1 (IPv4) 00:31:01.692 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:01.692 Entry Flags: 00:31:01.692 Duplicate Returned Information: 1 00:31:01.692 Explicit Persistent Connection Support for Discovery: 1 00:31:01.692 Transport Requirements: 00:31:01.692 Secure Channel: Not Required 00:31:01.692 Port ID: 0 (0x0000) 00:31:01.692 Controller ID: 65535 (0xffff) 00:31:01.692 Admin Max SQ Size: 128 00:31:01.692 Transport Service Identifier: 4420 00:31:01.692 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:01.692 Transport Address: 10.0.0.2 00:31:01.692 
Discovery Log Entry 1 00:31:01.692 ---------------------- 00:31:01.692 Transport Type: 3 (TCP) 00:31:01.692 Address Family: 1 (IPv4) 00:31:01.692 Subsystem Type: 2 (NVM Subsystem) 00:31:01.692 Entry Flags: 00:31:01.692 Duplicate Returned Information: 0 00:31:01.692 Explicit Persistent Connection Support for Discovery: 0 00:31:01.692 Transport Requirements: 00:31:01.692 Secure Channel: Not Required 00:31:01.692 Port ID: 0 (0x0000) 00:31:01.692 Controller ID: 65535 (0xffff) 00:31:01.692 Admin Max SQ Size: 128 00:31:01.692 Transport Service Identifier: 4420 00:31:01.692 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:31:01.692 Transport Address: 10.0.0.2 [2024-05-15 20:22:53.988423] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:31:01.692 [2024-05-15 20:22:53.988435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.692 [2024-05-15 20:22:53.988442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.692 [2024-05-15 20:22:53.988448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.692 [2024-05-15 20:22:53.988454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.692 [2024-05-15 20:22:53.988462] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.692 [2024-05-15 20:22:53.988466] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.692 [2024-05-15 20:22:53.988469] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e94c30) 00:31:01.692 [2024-05-15 20:22:53.988476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.693 [2024-05-15 20:22:53.988490] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efcda0, cid 3, qid 0 00:31:01.693 [2024-05-15 20:22:53.988782] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.693 [2024-05-15 20:22:53.988788] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.693 [2024-05-15 20:22:53.988792] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.693 [2024-05-15 20:22:53.988795] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efcda0) on tqpair=0x1e94c30 00:31:01.693 [2024-05-15 20:22:53.988803] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.693 [2024-05-15 20:22:53.988806] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.693 [2024-05-15 20:22:53.988810] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e94c30) 00:31:01.693 [2024-05-15 20:22:53.988816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.693 [2024-05-15 20:22:53.988829] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efcda0, cid 3, qid 0 00:31:01.693 [2024-05-15 20:22:53.989078] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.693 [2024-05-15 20:22:53.989084] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.693 [2024-05-15 20:22:53.989088] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.693 [2024-05-15 20:22:53.989091] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efcda0) on tqpair=0x1e94c30 00:31:01.693 [2024-05-15 20:22:53.989097] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:31:01.693 [2024-05-15 20:22:53.989101] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:31:01.693 [2024-05-15 20:22:53.989110] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.693 [2024-05-15 20:22:53.989114] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.693 [2024-05-15 20:22:53.989117] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e94c30) 00:31:01.693 [2024-05-15 20:22:53.989124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.693 [2024-05-15 20:22:53.989133] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efcda0, cid 3, qid 0 00:31:01.693 [2024-05-15 20:22:53.989380] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.693 [2024-05-15 20:22:53.989387] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.693 [2024-05-15 20:22:53.989390] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.693 [2024-05-15 20:22:53.989394] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efcda0) on tqpair=0x1e94c30 00:31:01.693 [2024-05-15 20:22:53.989404] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.693 [2024-05-15 20:22:53.989408] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.693 [2024-05-15 20:22:53.989412] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e94c30) 00:31:01.693 [2024-05-15 20:22:53.989418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.693 [2024-05-15 20:22:53.989428] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efcda0, cid 3, qid 0 00:31:01.693 [2024-05-15 20:22:53.989644] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.693 [2024-05-15 20:22:53.989650] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.693 [2024-05-15 20:22:53.989654] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.693 [2024-05-15 20:22:53.989657] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efcda0) on tqpair=0x1e94c30 00:31:01.693 [2024-05-15 20:22:53.989667] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.693 [2024-05-15 20:22:53.989671] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.693 [2024-05-15 20:22:53.989675] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e94c30) 00:31:01.693 [2024-05-15 20:22:53.989681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.693 [2024-05-15 20:22:53.989690] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efcda0, cid 3, qid 0 00:31:01.693 [2024-05-15 20:22:53.989936] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.693 [2024-05-15 
20:22:53.989942] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.693 [2024-05-15 20:22:53.989946] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.693 [2024-05-15 20:22:53.989949] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efcda0) on tqpair=0x1e94c30 00:31:01.693 [2024-05-15 20:22:53.989960] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.693 [2024-05-15 20:22:53.989964] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.693 [2024-05-15 20:22:53.989967] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e94c30) 00:31:01.693 [2024-05-15 20:22:53.989975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.693 [2024-05-15 20:22:53.989985] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efcda0, cid 3, qid 0 00:31:01.693 [2024-05-15 20:22:53.990237] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.693 [2024-05-15 20:22:53.990243] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.693 [2024-05-15 20:22:53.990247] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.693 [2024-05-15 20:22:53.990250] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efcda0) on tqpair=0x1e94c30 00:31:01.693 [2024-05-15 20:22:53.990260] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.693 [2024-05-15 20:22:53.990264] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.693 [2024-05-15 20:22:53.990267] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e94c30) 00:31:01.693 [2024-05-15 20:22:53.990274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.693 [2024-05-15 20:22:53.990283] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efcda0, cid 3, qid 0 00:31:01.693 [2024-05-15 20:22:53.990489] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.693 [2024-05-15 20:22:53.990496] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.693 [2024-05-15 20:22:53.990499] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.693 [2024-05-15 20:22:53.990503] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efcda0) on tqpair=0x1e94c30 00:31:01.693 [2024-05-15 20:22:53.990513] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.693 [2024-05-15 20:22:53.990517] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.693 [2024-05-15 20:22:53.990520] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e94c30) 00:31:01.693 [2024-05-15 20:22:53.990527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.693 [2024-05-15 20:22:53.990536] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efcda0, cid 3, qid 0 00:31:01.693 [2024-05-15 20:22:53.990752] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.693 [2024-05-15 20:22:53.990758] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.693 [2024-05-15 20:22:53.990762] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:31:01.693 [2024-05-15 20:22:53.990765] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efcda0) on tqpair=0x1e94c30 00:31:01.693 [2024-05-15 20:22:53.990775] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.693 [2024-05-15 20:22:53.990779] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.693 [2024-05-15 20:22:53.990783] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e94c30) 00:31:01.693 [2024-05-15 20:22:53.990789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.693 [2024-05-15 20:22:53.990798] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efcda0, cid 3, qid 0 00:31:01.693 [2024-05-15 20:22:53.990994] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.693 [2024-05-15 20:22:53.991000] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.693 [2024-05-15 20:22:53.991003] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.693 [2024-05-15 20:22:53.991007] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efcda0) on tqpair=0x1e94c30 00:31:01.693 [2024-05-15 20:22:53.991017] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.693 [2024-05-15 20:22:53.991021] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.693 [2024-05-15 20:22:53.991024] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e94c30) 00:31:01.693 [2024-05-15 20:22:53.991031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.693 [2024-05-15 20:22:53.991042] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efcda0, cid 3, qid 0 00:31:01.693 [2024-05-15 20:22:53.991246] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.693 [2024-05-15 20:22:53.991252] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.693 [2024-05-15 20:22:53.991256] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.693 [2024-05-15 20:22:53.991259] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efcda0) on tqpair=0x1e94c30 00:31:01.693 [2024-05-15 20:22:53.991269] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.693 [2024-05-15 20:22:53.991273] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.693 [2024-05-15 20:22:53.991277] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e94c30) 00:31:01.693 [2024-05-15 20:22:53.991283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.693 [2024-05-15 20:22:53.991292] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efcda0, cid 3, qid 0 00:31:01.693 [2024-05-15 20:22:53.991497] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.693 [2024-05-15 20:22:53.991503] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.693 [2024-05-15 20:22:53.991507] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.693 [2024-05-15 20:22:53.991510] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efcda0) on tqpair=0x1e94c30 00:31:01.693 [2024-05-15 20:22:53.991520] 
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.693 [2024-05-15 20:22:53.991524] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.693 [2024-05-15 20:22:53.991527] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e94c30) 00:31:01.693 [2024-05-15 20:22:53.991534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.693 [2024-05-15 20:22:53.991543] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efcda0, cid 3, qid 0 00:31:01.693 [2024-05-15 20:22:53.991745] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.693 [2024-05-15 20:22:53.991751] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.693 [2024-05-15 20:22:53.991754] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.693 [2024-05-15 20:22:53.991758] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efcda0) on tqpair=0x1e94c30 00:31:01.693 [2024-05-15 20:22:53.991768] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.693 [2024-05-15 20:22:53.991772] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.693 [2024-05-15 20:22:53.991775] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e94c30) 00:31:01.693 [2024-05-15 20:22:53.991782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.694 [2024-05-15 20:22:53.991791] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efcda0, cid 3, qid 0 00:31:01.694 [2024-05-15 20:22:53.992001] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.694 [2024-05-15 20:22:53.992008] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.694 [2024-05-15 20:22:53.992011] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.694 [2024-05-15 20:22:53.992014] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efcda0) on tqpair=0x1e94c30 00:31:01.694 [2024-05-15 20:22:53.992025] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.694 [2024-05-15 20:22:53.992028] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.694 [2024-05-15 20:22:53.992032] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e94c30) 00:31:01.694 [2024-05-15 20:22:53.992038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.694 [2024-05-15 20:22:53.992049] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efcda0, cid 3, qid 0 00:31:01.694 [2024-05-15 20:22:53.992304] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.694 [2024-05-15 20:22:53.992310] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.694 [2024-05-15 20:22:53.996319] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.694 [2024-05-15 20:22:53.996331] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efcda0) on tqpair=0x1e94c30 00:31:01.694 [2024-05-15 20:22:53.996342] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.694 [2024-05-15 20:22:53.996346] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.694 [2024-05-15 
20:22:53.996349] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e94c30) 00:31:01.694 [2024-05-15 20:22:53.996356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.694 [2024-05-15 20:22:53.996368] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1efcda0, cid 3, qid 0 00:31:01.694 [2024-05-15 20:22:53.996574] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.694 [2024-05-15 20:22:53.996581] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.694 [2024-05-15 20:22:53.996584] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.694 [2024-05-15 20:22:53.996588] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1efcda0) on tqpair=0x1e94c30 00:31:01.694 [2024-05-15 20:22:53.996596] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:31:01.694 00:31:01.694 20:22:54 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:31:01.694 [2024-05-15 20:22:54.032874] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:31:01.694 [2024-05-15 20:22:54.032922] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid210326 ] 00:31:01.694 EAL: No free 2048 kB hugepages reported on node 1 00:31:01.694 [2024-05-15 20:22:54.065853] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:31:01.694 [2024-05-15 20:22:54.065897] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:01.694 [2024-05-15 20:22:54.065902] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:01.694 [2024-05-15 20:22:54.065912] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:01.694 [2024-05-15 20:22:54.065919] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:01.694 [2024-05-15 20:22:54.069341] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:31:01.694 [2024-05-15 20:22:54.069367] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1b7fc30 0 00:31:01.694 [2024-05-15 20:22:54.077321] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:01.694 [2024-05-15 20:22:54.077332] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:01.694 [2024-05-15 20:22:54.077338] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:01.694 [2024-05-15 20:22:54.077342] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:01.694 [2024-05-15 20:22:54.077372] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.694 [2024-05-15 20:22:54.077378] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.694 [2024-05-15 20:22:54.077382] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7fc30) 
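The spdk_nvme_identify invocation above (trtype:tcp, traddr:10.0.0.2, trsvcid:4420, subnqn:nqn.2016-06.io.spdk:cnode1) attaches to the NVM subsystem advertised in Discovery Log Entry 1 and dumps its controller data; the nvme_ctrlr/nvme_tcp DEBUG lines around it trace that connect-and-identify state machine inside the library. The following is a minimal sketch, assuming only the transport parameters copied from the -r argument, of how the same attach looks through SPDK's public C API; it is illustrative, not the test's own code, and error handling is abbreviated.

    /* Sketch: attach to the target from the log and read identify-controller data. */
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        /* The SPDK environment must be initialized before any NVMe calls. */
        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";    /* hypothetical application name */
        if (spdk_env_init(&env_opts) != 0) {
            return 1;
        }

        /* Transport ID string copied from the -r argument above. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* spdk_nvme_connect() drives the connect/enable/identify sequence that the
         * surrounding DEBUG lines record. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            fprintf(stderr, "failed to connect to %s\n", trid.subnqn);
            return 1;
        }

        /* This data backs the controller summary printed earlier in the log. */
        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Model: %.40s  Firmware: %.8s\n",
               (const char *)cdata->mn, (const char *)cdata->fr);

        spdk_nvme_detach(ctrlr);
        return 0;
    }

The spdk_nvme_identify tool in the log performs this same attach and then formats every identify structure and log page, which is why its output above is far more detailed.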
00:31:01.694 [2024-05-15 20:22:54.077397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:01.694 [2024-05-15 20:22:54.077413] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be7980, cid 0, qid 0 00:31:01.694 [2024-05-15 20:22:54.084322] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.694 [2024-05-15 20:22:54.084332] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.694 [2024-05-15 20:22:54.084335] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.694 [2024-05-15 20:22:54.084340] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be7980) on tqpair=0x1b7fc30 00:31:01.694 [2024-05-15 20:22:54.084352] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:01.694 [2024-05-15 20:22:54.084358] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:31:01.694 [2024-05-15 20:22:54.084363] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:31:01.694 [2024-05-15 20:22:54.084374] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.694 [2024-05-15 20:22:54.084378] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.694 [2024-05-15 20:22:54.084381] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7fc30) 00:31:01.694 [2024-05-15 20:22:54.084389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.694 [2024-05-15 20:22:54.084401] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be7980, cid 0, qid 0 00:31:01.694 [2024-05-15 20:22:54.084602] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.694 [2024-05-15 20:22:54.084609] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.694 [2024-05-15 20:22:54.084613] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.694 [2024-05-15 20:22:54.084616] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be7980) on tqpair=0x1b7fc30 00:31:01.694 [2024-05-15 20:22:54.084622] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:31:01.694 [2024-05-15 20:22:54.084629] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:31:01.694 [2024-05-15 20:22:54.084636] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.694 [2024-05-15 20:22:54.084639] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.694 [2024-05-15 20:22:54.084643] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7fc30) 00:31:01.694 [2024-05-15 20:22:54.084650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.694 [2024-05-15 20:22:54.084660] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be7980, cid 0, qid 0 00:31:01.694 [2024-05-15 20:22:54.084868] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.694 [2024-05-15 20:22:54.084874] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.694 
[2024-05-15 20:22:54.084878] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.694 [2024-05-15 20:22:54.084881] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be7980) on tqpair=0x1b7fc30 00:31:01.694 [2024-05-15 20:22:54.084887] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:31:01.694 [2024-05-15 20:22:54.084895] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:31:01.694 [2024-05-15 20:22:54.084901] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.694 [2024-05-15 20:22:54.084905] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.694 [2024-05-15 20:22:54.084908] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7fc30) 00:31:01.694 [2024-05-15 20:22:54.084917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.694 [2024-05-15 20:22:54.084927] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be7980, cid 0, qid 0 00:31:01.694 [2024-05-15 20:22:54.085089] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.694 [2024-05-15 20:22:54.085096] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.694 [2024-05-15 20:22:54.085099] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.694 [2024-05-15 20:22:54.085103] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be7980) on tqpair=0x1b7fc30 00:31:01.694 [2024-05-15 20:22:54.085108] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:01.694 [2024-05-15 20:22:54.085118] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.694 [2024-05-15 20:22:54.085121] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.694 [2024-05-15 20:22:54.085125] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7fc30) 00:31:01.694 [2024-05-15 20:22:54.085131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.694 [2024-05-15 20:22:54.085141] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be7980, cid 0, qid 0 00:31:01.694 [2024-05-15 20:22:54.085317] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.694 [2024-05-15 20:22:54.085323] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.694 [2024-05-15 20:22:54.085327] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.694 [2024-05-15 20:22:54.085330] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be7980) on tqpair=0x1b7fc30 00:31:01.694 [2024-05-15 20:22:54.085335] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:31:01.694 [2024-05-15 20:22:54.085340] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:31:01.694 [2024-05-15 20:22:54.085347] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 
00:31:01.694 [2024-05-15 20:22:54.085453] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:31:01.694 [2024-05-15 20:22:54.085457] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:01.694 [2024-05-15 20:22:54.085464] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.694 [2024-05-15 20:22:54.085468] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.694 [2024-05-15 20:22:54.085471] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7fc30) 00:31:01.694 [2024-05-15 20:22:54.085478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.694 [2024-05-15 20:22:54.085488] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be7980, cid 0, qid 0 00:31:01.695 [2024-05-15 20:22:54.085648] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.695 [2024-05-15 20:22:54.085654] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.695 [2024-05-15 20:22:54.085658] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.695 [2024-05-15 20:22:54.085661] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be7980) on tqpair=0x1b7fc30 00:31:01.695 [2024-05-15 20:22:54.085667] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:01.695 [2024-05-15 20:22:54.085676] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.695 [2024-05-15 20:22:54.085680] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.695 [2024-05-15 20:22:54.085685] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7fc30) 00:31:01.695 [2024-05-15 20:22:54.085692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.695 [2024-05-15 20:22:54.085701] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be7980, cid 0, qid 0 00:31:01.695 [2024-05-15 20:22:54.085847] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.695 [2024-05-15 20:22:54.085853] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.695 [2024-05-15 20:22:54.085857] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.695 [2024-05-15 20:22:54.085860] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be7980) on tqpair=0x1b7fc30 00:31:01.695 [2024-05-15 20:22:54.085865] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:01.695 [2024-05-15 20:22:54.085870] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:31:01.695 [2024-05-15 20:22:54.085877] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:31:01.695 [2024-05-15 20:22:54.085885] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:31:01.695 [2024-05-15 20:22:54.085894] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.695 [2024-05-15 20:22:54.085897] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7fc30) 00:31:01.695 [2024-05-15 20:22:54.085904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.695 [2024-05-15 20:22:54.085914] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be7980, cid 0, qid 0 00:31:01.695 [2024-05-15 20:22:54.086173] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:01.695 [2024-05-15 20:22:54.086179] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:01.695 [2024-05-15 20:22:54.086183] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:01.695 [2024-05-15 20:22:54.086186] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b7fc30): datao=0, datal=4096, cccid=0 00:31:01.695 [2024-05-15 20:22:54.086191] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1be7980) on tqpair(0x1b7fc30): expected_datao=0, payload_size=4096 00:31:01.695 [2024-05-15 20:22:54.086196] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.695 [2024-05-15 20:22:54.086251] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:01.695 [2024-05-15 20:22:54.086255] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:01.695 [2024-05-15 20:22:54.131320] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.695 [2024-05-15 20:22:54.131330] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.695 [2024-05-15 20:22:54.131334] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.695 [2024-05-15 20:22:54.131338] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be7980) on tqpair=0x1b7fc30 00:31:01.695 [2024-05-15 20:22:54.131346] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:31:01.695 [2024-05-15 20:22:54.131351] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:31:01.695 [2024-05-15 20:22:54.131356] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:31:01.695 [2024-05-15 20:22:54.131360] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:31:01.695 [2024-05-15 20:22:54.131364] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:31:01.695 [2024-05-15 20:22:54.131369] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:31:01.695 [2024-05-15 20:22:54.131383] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:31:01.695 [2024-05-15 20:22:54.131391] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.695 [2024-05-15 20:22:54.131395] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.695 [2024-05-15 20:22:54.131399] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7fc30) 00:31:01.695 [2024-05-15 20:22:54.131406] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 
cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:01.695 [2024-05-15 20:22:54.131418] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be7980, cid 0, qid 0 00:31:01.695 [2024-05-15 20:22:54.131601] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.695 [2024-05-15 20:22:54.131607] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.695 [2024-05-15 20:22:54.131611] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.695 [2024-05-15 20:22:54.131614] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be7980) on tqpair=0x1b7fc30 00:31:01.695 [2024-05-15 20:22:54.131622] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.695 [2024-05-15 20:22:54.131626] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.695 [2024-05-15 20:22:54.131629] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b7fc30) 00:31:01.695 [2024-05-15 20:22:54.131635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:01.695 [2024-05-15 20:22:54.131641] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.695 [2024-05-15 20:22:54.131645] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.695 [2024-05-15 20:22:54.131648] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1b7fc30) 00:31:01.695 [2024-05-15 20:22:54.131654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:01.695 [2024-05-15 20:22:54.131660] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.695 [2024-05-15 20:22:54.131664] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.695 [2024-05-15 20:22:54.131667] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1b7fc30) 00:31:01.695 [2024-05-15 20:22:54.131673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:01.695 [2024-05-15 20:22:54.131679] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.695 [2024-05-15 20:22:54.131682] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.695 [2024-05-15 20:22:54.131686] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7fc30) 00:31:01.695 [2024-05-15 20:22:54.131691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:01.695 [2024-05-15 20:22:54.131696] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:31:01.695 [2024-05-15 20:22:54.131706] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:01.695 [2024-05-15 20:22:54.131712] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.695 [2024-05-15 20:22:54.131716] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b7fc30) 00:31:01.695 [2024-05-15 20:22:54.131722] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
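The four ASYNC EVENT REQUEST (0c) submissions above, issued right after SET FEATURES ASYNC EVENT CONFIGURATION, are the driver arming its asynchronous-event slots during initialization. A hedged sketch of how an application would consume those events through SPDK's public API follows; the callback name, the bare polling loop, and watch_events() itself are illustrative assumptions rather than anything the test runs.

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Hypothetical callback: the library invokes it when one of the outstanding
     * ASYNC EVENT REQUEST commands completes with an event. */
    static void
    aer_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        if (spdk_nvme_cpl_is_error(cpl)) {
            fprintf(stderr, "AER completed with an error\n");
            return;
        }
        /* cdw0 carries the event type and associated log page per the NVMe spec. */
        printf("async event received: cdw0=0x%08x\n", cpl->cdw0);
    }

    /* Hypothetical helper: register the callback and poll the admin queue, which
     * also services the keep-alive traffic set up in the surrounding lines. */
    static void
    watch_events(struct spdk_nvme_ctrlr *ctrlr)
    {
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
        for (;;) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
    }

A real application would fold that polling into its reactor or event loop instead of spinning, but the flow is the same: the requests stay outstanding on the admin queue until the target has something to report.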
00:31:01.695 [2024-05-15 20:22:54.131734] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be7980, cid 0, qid 0 00:31:01.695 [2024-05-15 20:22:54.131739] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be7ae0, cid 1, qid 0 00:31:01.695 [2024-05-15 20:22:54.131746] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be7c40, cid 2, qid 0 00:31:01.695 [2024-05-15 20:22:54.131751] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be7da0, cid 3, qid 0 00:31:01.695 [2024-05-15 20:22:54.131755] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be7f00, cid 4, qid 0 00:31:01.695 [2024-05-15 20:22:54.131944] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.695 [2024-05-15 20:22:54.131951] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.695 [2024-05-15 20:22:54.131954] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.695 [2024-05-15 20:22:54.131958] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be7f00) on tqpair=0x1b7fc30 00:31:01.695 [2024-05-15 20:22:54.131963] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:31:01.695 [2024-05-15 20:22:54.131968] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:31:01.695 [2024-05-15 20:22:54.131977] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:31:01.695 [2024-05-15 20:22:54.131984] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:31:01.695 [2024-05-15 20:22:54.131990] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.695 [2024-05-15 20:22:54.131994] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.695 [2024-05-15 20:22:54.131997] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b7fc30) 00:31:01.695 [2024-05-15 20:22:54.132003] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:01.695 [2024-05-15 20:22:54.132013] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be7f00, cid 4, qid 0 00:31:01.695 [2024-05-15 20:22:54.132136] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.695 [2024-05-15 20:22:54.132143] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.695 [2024-05-15 20:22:54.132146] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.695 [2024-05-15 20:22:54.132149] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be7f00) on tqpair=0x1b7fc30 00:31:01.695 [2024-05-15 20:22:54.132204] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:31:01.695 [2024-05-15 20:22:54.132213] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:31:01.695 [2024-05-15 20:22:54.132221] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.695 [2024-05-15 20:22:54.132224] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=4 on tqpair(0x1b7fc30) 00:31:01.695 [2024-05-15 20:22:54.132231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.695 [2024-05-15 20:22:54.132241] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be7f00, cid 4, qid 0 00:31:01.695 [2024-05-15 20:22:54.132380] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:01.695 [2024-05-15 20:22:54.132387] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:01.695 [2024-05-15 20:22:54.132391] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:01.696 [2024-05-15 20:22:54.132395] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b7fc30): datao=0, datal=4096, cccid=4 00:31:01.696 [2024-05-15 20:22:54.132399] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1be7f00) on tqpair(0x1b7fc30): expected_datao=0, payload_size=4096 00:31:01.696 [2024-05-15 20:22:54.132403] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.696 [2024-05-15 20:22:54.132410] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:01.696 [2024-05-15 20:22:54.132416] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:01.696 [2024-05-15 20:22:54.132561] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.696 [2024-05-15 20:22:54.132567] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.696 [2024-05-15 20:22:54.132570] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.696 [2024-05-15 20:22:54.132574] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be7f00) on tqpair=0x1b7fc30 00:31:01.696 [2024-05-15 20:22:54.132585] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:31:01.696 [2024-05-15 20:22:54.132600] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:31:01.696 [2024-05-15 20:22:54.132608] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:31:01.696 [2024-05-15 20:22:54.132615] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.696 [2024-05-15 20:22:54.132619] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b7fc30) 00:31:01.696 [2024-05-15 20:22:54.132625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.696 [2024-05-15 20:22:54.132636] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be7f00, cid 4, qid 0 00:31:01.696 [2024-05-15 20:22:54.132840] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:01.696 [2024-05-15 20:22:54.132846] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:01.696 [2024-05-15 20:22:54.132850] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:01.696 [2024-05-15 20:22:54.132853] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b7fc30): datao=0, datal=4096, cccid=4 00:31:01.696 [2024-05-15 20:22:54.132857] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1be7f00) on tqpair(0x1b7fc30): expected_datao=0, payload_size=4096 
00:31:01.696 [2024-05-15 20:22:54.132861] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.696 [2024-05-15 20:22:54.132868] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:01.696 [2024-05-15 20:22:54.132871] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:01.696 [2024-05-15 20:22:54.133051] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.696 [2024-05-15 20:22:54.133058] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.696 [2024-05-15 20:22:54.133061] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.696 [2024-05-15 20:22:54.133065] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be7f00) on tqpair=0x1b7fc30 00:31:01.696 [2024-05-15 20:22:54.133074] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:31:01.696 [2024-05-15 20:22:54.133083] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:31:01.696 [2024-05-15 20:22:54.133090] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.696 [2024-05-15 20:22:54.133094] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b7fc30) 00:31:01.696 [2024-05-15 20:22:54.133100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.696 [2024-05-15 20:22:54.133110] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be7f00, cid 4, qid 0 00:31:01.696 [2024-05-15 20:22:54.133348] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:01.696 [2024-05-15 20:22:54.133355] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:01.696 [2024-05-15 20:22:54.133358] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:01.696 [2024-05-15 20:22:54.133362] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b7fc30): datao=0, datal=4096, cccid=4 00:31:01.696 [2024-05-15 20:22:54.133368] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1be7f00) on tqpair(0x1b7fc30): expected_datao=0, payload_size=4096 00:31:01.696 [2024-05-15 20:22:54.133373] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.696 [2024-05-15 20:22:54.133379] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:01.696 [2024-05-15 20:22:54.133383] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:01.696 [2024-05-15 20:22:54.133616] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.696 [2024-05-15 20:22:54.133622] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.696 [2024-05-15 20:22:54.133625] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.696 [2024-05-15 20:22:54.133629] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be7f00) on tqpair=0x1b7fc30 00:31:01.696 [2024-05-15 20:22:54.133639] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:31:01.696 [2024-05-15 20:22:54.133646] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 
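The "Namespace 1 was added" message and the IDENTIFY cid:4 nsid:1 commands above are the active-namespace scan and per-namespace identify steps of initialization. Once the controller reaches the ready state, that result is reachable through SPDK's namespace accessors; the sketch below (the function name list_namespaces() is an illustrative assumption) shows the generic walk.

    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Walk the active namespaces the controller reported and print the geometry
     * that the identify-namespace data provides. */
    static void
    list_namespaces(struct spdk_nvme_ctrlr *ctrlr)
    {
        for (uint32_t nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
             nsid != 0;
             nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
            struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
            if (ns == NULL || !spdk_nvme_ns_is_active(ns)) {
                continue;
            }
            printf("nsid %" PRIu32 ": %" PRIu32 "-byte sectors, %" PRIu64 " bytes total\n",
                   nsid, spdk_nvme_ns_get_sector_size(ns), spdk_nvme_ns_get_size(ns));
        }
    }

For the discovery controller identified earlier this list is empty (Max Number of Namespaces: 0); it is the nqn.2016-06.io.spdk:cnode1 controller being probed here that reports namespace 1.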
00:31:01.696 [2024-05-15 20:22:54.133653] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:31:01.696 [2024-05-15 20:22:54.133659] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:31:01.696 [2024-05-15 20:22:54.133664] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:31:01.696 [2024-05-15 20:22:54.133669] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:31:01.696 [2024-05-15 20:22:54.133674] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:31:01.696 [2024-05-15 20:22:54.133679] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:31:01.696 [2024-05-15 20:22:54.133694] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.696 [2024-05-15 20:22:54.133698] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b7fc30) 00:31:01.696 [2024-05-15 20:22:54.133705] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.696 [2024-05-15 20:22:54.133711] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.696 [2024-05-15 20:22:54.133715] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.696 [2024-05-15 20:22:54.133718] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b7fc30) 00:31:01.696 [2024-05-15 20:22:54.133724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:01.697 [2024-05-15 20:22:54.133737] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be7f00, cid 4, qid 0 00:31:01.697 [2024-05-15 20:22:54.133742] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be8060, cid 5, qid 0 00:31:01.697 [2024-05-15 20:22:54.133924] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.697 [2024-05-15 20:22:54.133930] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.697 [2024-05-15 20:22:54.133933] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.697 [2024-05-15 20:22:54.133937] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be7f00) on tqpair=0x1b7fc30 00:31:01.697 [2024-05-15 20:22:54.133945] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.697 [2024-05-15 20:22:54.133950] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.697 [2024-05-15 20:22:54.133954] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.697 [2024-05-15 20:22:54.133957] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be8060) on tqpair=0x1b7fc30 00:31:01.697 [2024-05-15 20:22:54.133967] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.697 [2024-05-15 20:22:54.133972] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b7fc30) 00:31:01.697 [2024-05-15 20:22:54.133979] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER 
MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.697 [2024-05-15 20:22:54.133988] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be8060, cid 5, qid 0 00:31:01.697 [2024-05-15 20:22:54.134115] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.697 [2024-05-15 20:22:54.134121] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.697 [2024-05-15 20:22:54.134125] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.697 [2024-05-15 20:22:54.134128] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be8060) on tqpair=0x1b7fc30 00:31:01.697 [2024-05-15 20:22:54.134138] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.697 [2024-05-15 20:22:54.134141] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b7fc30) 00:31:01.697 [2024-05-15 20:22:54.134148] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.697 [2024-05-15 20:22:54.134157] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be8060, cid 5, qid 0 00:31:01.697 [2024-05-15 20:22:54.134338] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.697 [2024-05-15 20:22:54.134345] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.697 [2024-05-15 20:22:54.134348] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.697 [2024-05-15 20:22:54.134352] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be8060) on tqpair=0x1b7fc30 00:31:01.697 [2024-05-15 20:22:54.134361] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.697 [2024-05-15 20:22:54.134365] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b7fc30) 00:31:01.697 [2024-05-15 20:22:54.134371] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.697 [2024-05-15 20:22:54.134381] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be8060, cid 5, qid 0 00:31:01.697 [2024-05-15 20:22:54.134510] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.697 [2024-05-15 20:22:54.134516] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.697 [2024-05-15 20:22:54.134520] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.697 [2024-05-15 20:22:54.134523] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be8060) on tqpair=0x1b7fc30 00:31:01.697 [2024-05-15 20:22:54.134535] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.697 [2024-05-15 20:22:54.134540] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b7fc30) 00:31:01.697 [2024-05-15 20:22:54.134546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.697 [2024-05-15 20:22:54.134553] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.697 [2024-05-15 20:22:54.134557] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b7fc30) 00:31:01.697 [2024-05-15 20:22:54.134563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET 
LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.697 [2024-05-15 20:22:54.134570] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.697 [2024-05-15 20:22:54.134574] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1b7fc30) 00:31:01.697 [2024-05-15 20:22:54.134580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.697 [2024-05-15 20:22:54.134589] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.697 [2024-05-15 20:22:54.134594] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1b7fc30) 00:31:01.697 [2024-05-15 20:22:54.134601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.697 [2024-05-15 20:22:54.134611] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be8060, cid 5, qid 0 00:31:01.697 [2024-05-15 20:22:54.134616] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be7f00, cid 4, qid 0 00:31:01.697 [2024-05-15 20:22:54.134621] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be81c0, cid 6, qid 0 00:31:01.697 [2024-05-15 20:22:54.134625] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be8320, cid 7, qid 0 00:31:01.697 [2024-05-15 20:22:54.134850] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:01.697 [2024-05-15 20:22:54.134856] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:01.697 [2024-05-15 20:22:54.134860] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:01.697 [2024-05-15 20:22:54.134863] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b7fc30): datao=0, datal=8192, cccid=5 00:31:01.697 [2024-05-15 20:22:54.134867] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1be8060) on tqpair(0x1b7fc30): expected_datao=0, payload_size=8192 00:31:01.697 [2024-05-15 20:22:54.134872] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.697 [2024-05-15 20:22:54.134956] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:01.697 [2024-05-15 20:22:54.134960] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:01.697 [2024-05-15 20:22:54.134966] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:01.697 [2024-05-15 20:22:54.134972] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:01.697 [2024-05-15 20:22:54.134975] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:01.697 [2024-05-15 20:22:54.134978] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b7fc30): datao=0, datal=512, cccid=4 00:31:01.697 [2024-05-15 20:22:54.134983] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1be7f00) on tqpair(0x1b7fc30): expected_datao=0, payload_size=512 00:31:01.697 [2024-05-15 20:22:54.134987] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.697 [2024-05-15 20:22:54.134993] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:01.697 [2024-05-15 20:22:54.134996] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:01.697 [2024-05-15 20:22:54.135002] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:01.697 [2024-05-15 20:22:54.135008] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:01.697 [2024-05-15 20:22:54.135011] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:01.697 [2024-05-15 20:22:54.135014] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b7fc30): datao=0, datal=512, cccid=6 00:31:01.697 [2024-05-15 20:22:54.135018] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1be81c0) on tqpair(0x1b7fc30): expected_datao=0, payload_size=512 00:31:01.697 [2024-05-15 20:22:54.135023] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.697 [2024-05-15 20:22:54.135029] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:01.697 [2024-05-15 20:22:54.135032] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:01.697 [2024-05-15 20:22:54.135038] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:01.697 [2024-05-15 20:22:54.135043] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:01.697 [2024-05-15 20:22:54.135047] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:01.697 [2024-05-15 20:22:54.135050] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b7fc30): datao=0, datal=4096, cccid=7 00:31:01.697 [2024-05-15 20:22:54.135054] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1be8320) on tqpair(0x1b7fc30): expected_datao=0, payload_size=4096 00:31:01.697 [2024-05-15 20:22:54.135058] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.697 [2024-05-15 20:22:54.135067] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:01.697 [2024-05-15 20:22:54.135070] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:01.697 [2024-05-15 20:22:54.135175] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.697 [2024-05-15 20:22:54.135181] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.697 [2024-05-15 20:22:54.135184] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.697 [2024-05-15 20:22:54.135188] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be8060) on tqpair=0x1b7fc30 00:31:01.697 [2024-05-15 20:22:54.135201] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.697 [2024-05-15 20:22:54.135207] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.697 [2024-05-15 20:22:54.135210] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.697 [2024-05-15 20:22:54.135214] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be7f00) on tqpair=0x1b7fc30 00:31:01.697 [2024-05-15 20:22:54.135223] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.697 [2024-05-15 20:22:54.135229] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.697 [2024-05-15 20:22:54.135232] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.697 [2024-05-15 20:22:54.135236] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be81c0) on tqpair=0x1b7fc30 00:31:01.697 [2024-05-15 20:22:54.135245] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.697 [2024-05-15 20:22:54.135251] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.697 [2024-05-15 20:22:54.135255] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.697 [2024-05-15 20:22:54.135258] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be8320) on tqpair=0x1b7fc30 00:31:01.697 ===================================================== 00:31:01.697 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:01.697 ===================================================== 00:31:01.697 Controller Capabilities/Features 00:31:01.697 ================================ 00:31:01.697 Vendor ID: 8086 00:31:01.697 Subsystem Vendor ID: 8086 00:31:01.697 Serial Number: SPDK00000000000001 00:31:01.697 Model Number: SPDK bdev Controller 00:31:01.697 Firmware Version: 24.05 00:31:01.697 Recommended Arb Burst: 6 00:31:01.697 IEEE OUI Identifier: e4 d2 5c 00:31:01.697 Multi-path I/O 00:31:01.697 May have multiple subsystem ports: Yes 00:31:01.697 May have multiple controllers: Yes 00:31:01.697 Associated with SR-IOV VF: No 00:31:01.697 Max Data Transfer Size: 131072 00:31:01.697 Max Number of Namespaces: 32 00:31:01.697 Max Number of I/O Queues: 127 00:31:01.698 NVMe Specification Version (VS): 1.3 00:31:01.698 NVMe Specification Version (Identify): 1.3 00:31:01.698 Maximum Queue Entries: 128 00:31:01.698 Contiguous Queues Required: Yes 00:31:01.698 Arbitration Mechanisms Supported 00:31:01.698 Weighted Round Robin: Not Supported 00:31:01.698 Vendor Specific: Not Supported 00:31:01.698 Reset Timeout: 15000 ms 00:31:01.698 Doorbell Stride: 4 bytes 00:31:01.698 NVM Subsystem Reset: Not Supported 00:31:01.698 Command Sets Supported 00:31:01.698 NVM Command Set: Supported 00:31:01.698 Boot Partition: Not Supported 00:31:01.698 Memory Page Size Minimum: 4096 bytes 00:31:01.698 Memory Page Size Maximum: 4096 bytes 00:31:01.698 Persistent Memory Region: Not Supported 00:31:01.698 Optional Asynchronous Events Supported 00:31:01.698 Namespace Attribute Notices: Supported 00:31:01.698 Firmware Activation Notices: Not Supported 00:31:01.698 ANA Change Notices: Not Supported 00:31:01.698 PLE Aggregate Log Change Notices: Not Supported 00:31:01.698 LBA Status Info Alert Notices: Not Supported 00:31:01.698 EGE Aggregate Log Change Notices: Not Supported 00:31:01.698 Normal NVM Subsystem Shutdown event: Not Supported 00:31:01.698 Zone Descriptor Change Notices: Not Supported 00:31:01.698 Discovery Log Change Notices: Not Supported 00:31:01.698 Controller Attributes 00:31:01.698 128-bit Host Identifier: Supported 00:31:01.698 Non-Operational Permissive Mode: Not Supported 00:31:01.698 NVM Sets: Not Supported 00:31:01.698 Read Recovery Levels: Not Supported 00:31:01.698 Endurance Groups: Not Supported 00:31:01.698 Predictable Latency Mode: Not Supported 00:31:01.698 Traffic Based Keep ALive: Not Supported 00:31:01.698 Namespace Granularity: Not Supported 00:31:01.698 SQ Associations: Not Supported 00:31:01.698 UUID List: Not Supported 00:31:01.698 Multi-Domain Subsystem: Not Supported 00:31:01.698 Fixed Capacity Management: Not Supported 00:31:01.698 Variable Capacity Management: Not Supported 00:31:01.698 Delete Endurance Group: Not Supported 00:31:01.698 Delete NVM Set: Not Supported 00:31:01.698 Extended LBA Formats Supported: Not Supported 00:31:01.698 Flexible Data Placement Supported: Not Supported 00:31:01.698 00:31:01.698 Controller Memory Buffer Support 00:31:01.698 ================================ 00:31:01.698 Supported: No 00:31:01.698 00:31:01.698 Persistent Memory Region Support 00:31:01.698 ================================ 00:31:01.698 Supported: No 
00:31:01.698 00:31:01.698 Admin Command Set Attributes 00:31:01.698 ============================ 00:31:01.698 Security Send/Receive: Not Supported 00:31:01.698 Format NVM: Not Supported 00:31:01.698 Firmware Activate/Download: Not Supported 00:31:01.698 Namespace Management: Not Supported 00:31:01.698 Device Self-Test: Not Supported 00:31:01.698 Directives: Not Supported 00:31:01.698 NVMe-MI: Not Supported 00:31:01.698 Virtualization Management: Not Supported 00:31:01.698 Doorbell Buffer Config: Not Supported 00:31:01.698 Get LBA Status Capability: Not Supported 00:31:01.698 Command & Feature Lockdown Capability: Not Supported 00:31:01.698 Abort Command Limit: 4 00:31:01.698 Async Event Request Limit: 4 00:31:01.698 Number of Firmware Slots: N/A 00:31:01.698 Firmware Slot 1 Read-Only: N/A 00:31:01.698 Firmware Activation Without Reset: N/A 00:31:01.698 Multiple Update Detection Support: N/A 00:31:01.698 Firmware Update Granularity: No Information Provided 00:31:01.698 Per-Namespace SMART Log: No 00:31:01.698 Asymmetric Namespace Access Log Page: Not Supported 00:31:01.698 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:31:01.698 Command Effects Log Page: Supported 00:31:01.698 Get Log Page Extended Data: Supported 00:31:01.698 Telemetry Log Pages: Not Supported 00:31:01.698 Persistent Event Log Pages: Not Supported 00:31:01.698 Supported Log Pages Log Page: May Support 00:31:01.698 Commands Supported & Effects Log Page: Not Supported 00:31:01.698 Feature Identifiers & Effects Log Page:May Support 00:31:01.698 NVMe-MI Commands & Effects Log Page: May Support 00:31:01.698 Data Area 4 for Telemetry Log: Not Supported 00:31:01.698 Error Log Page Entries Supported: 128 00:31:01.698 Keep Alive: Supported 00:31:01.698 Keep Alive Granularity: 10000 ms 00:31:01.698 00:31:01.698 NVM Command Set Attributes 00:31:01.698 ========================== 00:31:01.698 Submission Queue Entry Size 00:31:01.698 Max: 64 00:31:01.698 Min: 64 00:31:01.698 Completion Queue Entry Size 00:31:01.698 Max: 16 00:31:01.698 Min: 16 00:31:01.698 Number of Namespaces: 32 00:31:01.698 Compare Command: Supported 00:31:01.698 Write Uncorrectable Command: Not Supported 00:31:01.698 Dataset Management Command: Supported 00:31:01.698 Write Zeroes Command: Supported 00:31:01.698 Set Features Save Field: Not Supported 00:31:01.698 Reservations: Supported 00:31:01.698 Timestamp: Not Supported 00:31:01.698 Copy: Supported 00:31:01.698 Volatile Write Cache: Present 00:31:01.698 Atomic Write Unit (Normal): 1 00:31:01.698 Atomic Write Unit (PFail): 1 00:31:01.698 Atomic Compare & Write Unit: 1 00:31:01.698 Fused Compare & Write: Supported 00:31:01.698 Scatter-Gather List 00:31:01.698 SGL Command Set: Supported 00:31:01.698 SGL Keyed: Supported 00:31:01.698 SGL Bit Bucket Descriptor: Not Supported 00:31:01.698 SGL Metadata Pointer: Not Supported 00:31:01.698 Oversized SGL: Not Supported 00:31:01.698 SGL Metadata Address: Not Supported 00:31:01.698 SGL Offset: Supported 00:31:01.698 Transport SGL Data Block: Not Supported 00:31:01.698 Replay Protected Memory Block: Not Supported 00:31:01.698 00:31:01.698 Firmware Slot Information 00:31:01.698 ========================= 00:31:01.698 Active slot: 1 00:31:01.698 Slot 1 Firmware Revision: 24.05 00:31:01.698 00:31:01.698 00:31:01.698 Commands Supported and Effects 00:31:01.698 ============================== 00:31:01.698 Admin Commands 00:31:01.698 -------------- 00:31:01.698 Get Log Page (02h): Supported 00:31:01.698 Identify (06h): Supported 00:31:01.698 Abort (08h): Supported 00:31:01.698 Set 
Features (09h): Supported 00:31:01.698 Get Features (0Ah): Supported 00:31:01.698 Asynchronous Event Request (0Ch): Supported 00:31:01.698 Keep Alive (18h): Supported 00:31:01.698 I/O Commands 00:31:01.698 ------------ 00:31:01.698 Flush (00h): Supported LBA-Change 00:31:01.698 Write (01h): Supported LBA-Change 00:31:01.698 Read (02h): Supported 00:31:01.698 Compare (05h): Supported 00:31:01.698 Write Zeroes (08h): Supported LBA-Change 00:31:01.698 Dataset Management (09h): Supported LBA-Change 00:31:01.698 Copy (19h): Supported LBA-Change 00:31:01.698 Unknown (79h): Supported LBA-Change 00:31:01.698 Unknown (7Ah): Supported 00:31:01.698 00:31:01.698 Error Log 00:31:01.698 ========= 00:31:01.698 00:31:01.698 Arbitration 00:31:01.698 =========== 00:31:01.698 Arbitration Burst: 1 00:31:01.698 00:31:01.698 Power Management 00:31:01.698 ================ 00:31:01.698 Number of Power States: 1 00:31:01.698 Current Power State: Power State #0 00:31:01.698 Power State #0: 00:31:01.698 Max Power: 0.00 W 00:31:01.698 Non-Operational State: Operational 00:31:01.698 Entry Latency: Not Reported 00:31:01.698 Exit Latency: Not Reported 00:31:01.698 Relative Read Throughput: 0 00:31:01.698 Relative Read Latency: 0 00:31:01.698 Relative Write Throughput: 0 00:31:01.698 Relative Write Latency: 0 00:31:01.698 Idle Power: Not Reported 00:31:01.698 Active Power: Not Reported 00:31:01.698 Non-Operational Permissive Mode: Not Supported 00:31:01.698 00:31:01.698 Health Information 00:31:01.698 ================== 00:31:01.698 Critical Warnings: 00:31:01.698 Available Spare Space: OK 00:31:01.698 Temperature: OK 00:31:01.698 Device Reliability: OK 00:31:01.698 Read Only: No 00:31:01.698 Volatile Memory Backup: OK 00:31:01.698 Current Temperature: 0 Kelvin (-273 Celsius) 00:31:01.698 Temperature Threshold: [2024-05-15 20:22:54.139365] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.698 [2024-05-15 20:22:54.139372] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1b7fc30) 00:31:01.698 [2024-05-15 20:22:54.139379] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.698 [2024-05-15 20:22:54.139393] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be8320, cid 7, qid 0 00:31:01.698 [2024-05-15 20:22:54.139564] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.698 [2024-05-15 20:22:54.139571] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.698 [2024-05-15 20:22:54.139575] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.698 [2024-05-15 20:22:54.139579] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be8320) on tqpair=0x1b7fc30 00:31:01.698 [2024-05-15 20:22:54.139607] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:31:01.698 [2024-05-15 20:22:54.139619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.698 [2024-05-15 20:22:54.139626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.698 [2024-05-15 20:22:54.139632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.698 [2024-05-15 20:22:54.139638] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.698 [2024-05-15 20:22:54.139646] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.698 [2024-05-15 20:22:54.139649] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.699 [2024-05-15 20:22:54.139653] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7fc30) 00:31:01.699 [2024-05-15 20:22:54.139660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.699 [2024-05-15 20:22:54.139672] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be7da0, cid 3, qid 0 00:31:01.699 [2024-05-15 20:22:54.139799] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.699 [2024-05-15 20:22:54.139805] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.699 [2024-05-15 20:22:54.139809] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.699 [2024-05-15 20:22:54.139813] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be7da0) on tqpair=0x1b7fc30 00:31:01.699 [2024-05-15 20:22:54.139820] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.699 [2024-05-15 20:22:54.139824] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.699 [2024-05-15 20:22:54.139827] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7fc30) 00:31:01.699 [2024-05-15 20:22:54.139834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.699 [2024-05-15 20:22:54.139846] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be7da0, cid 3, qid 0 00:31:01.699 [2024-05-15 20:22:54.140020] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.699 [2024-05-15 20:22:54.140026] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.699 [2024-05-15 20:22:54.140029] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.699 [2024-05-15 20:22:54.140033] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be7da0) on tqpair=0x1b7fc30 00:31:01.699 [2024-05-15 20:22:54.140038] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:31:01.699 [2024-05-15 20:22:54.140043] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:31:01.699 [2024-05-15 20:22:54.140052] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.699 [2024-05-15 20:22:54.140056] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.699 [2024-05-15 20:22:54.140059] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7fc30) 00:31:01.699 [2024-05-15 20:22:54.140066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.699 [2024-05-15 20:22:54.140076] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be7da0, cid 3, qid 0 00:31:01.699 [2024-05-15 20:22:54.140238] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.699 [2024-05-15 20:22:54.140244] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.699 [2024-05-15 
20:22:54.140247] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.699 [2024-05-15 20:22:54.140251] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be7da0) on tqpair=0x1b7fc30 00:31:01.699 [2024-05-15 20:22:54.140262] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.699 [2024-05-15 20:22:54.140266] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.699 [2024-05-15 20:22:54.140269] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7fc30) 00:31:01.699 [2024-05-15 20:22:54.140276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.699 [2024-05-15 20:22:54.140285] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be7da0, cid 3, qid 0 00:31:01.699 [2024-05-15 20:22:54.140455] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.699 [2024-05-15 20:22:54.140462] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.699 [2024-05-15 20:22:54.140466] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.699 [2024-05-15 20:22:54.140469] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be7da0) on tqpair=0x1b7fc30 00:31:01.699 [2024-05-15 20:22:54.140480] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.699 [2024-05-15 20:22:54.140484] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.699 [2024-05-15 20:22:54.140487] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7fc30) 00:31:01.699 [2024-05-15 20:22:54.140494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.699 [2024-05-15 20:22:54.140506] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be7da0, cid 3, qid 0 00:31:01.699 [2024-05-15 20:22:54.140670] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.699 [2024-05-15 20:22:54.140676] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.699 [2024-05-15 20:22:54.140680] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.699 [2024-05-15 20:22:54.140683] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be7da0) on tqpair=0x1b7fc30 00:31:01.699 [2024-05-15 20:22:54.140694] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.699 [2024-05-15 20:22:54.140698] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.699 [2024-05-15 20:22:54.140701] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7fc30) 00:31:01.699 [2024-05-15 20:22:54.140708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.699 [2024-05-15 20:22:54.140717] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be7da0, cid 3, qid 0 00:31:01.699 [2024-05-15 20:22:54.140887] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.699 [2024-05-15 20:22:54.140893] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.699 [2024-05-15 20:22:54.140896] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.699 [2024-05-15 20:22:54.140900] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0x1be7da0) on tqpair=0x1b7fc30 00:31:01.699 [2024-05-15 20:22:54.140910] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.699 [2024-05-15 20:22:54.140914] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.699 [2024-05-15 20:22:54.140918] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7fc30) 00:31:01.699 [2024-05-15 20:22:54.140924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.699 [2024-05-15 20:22:54.140934] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be7da0, cid 3, qid 0 00:31:01.699 [2024-05-15 20:22:54.141110] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.699 [2024-05-15 20:22:54.141116] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.699 [2024-05-15 20:22:54.141119] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.699 [2024-05-15 20:22:54.141123] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be7da0) on tqpair=0x1b7fc30 00:31:01.699 [2024-05-15 20:22:54.141133] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.699 [2024-05-15 20:22:54.141137] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.699 [2024-05-15 20:22:54.141141] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7fc30) 00:31:01.699 [2024-05-15 20:22:54.141147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.699 [2024-05-15 20:22:54.141157] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be7da0, cid 3, qid 0 00:31:01.699 [2024-05-15 20:22:54.141378] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.699 [2024-05-15 20:22:54.141385] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.699 [2024-05-15 20:22:54.141389] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.699 [2024-05-15 20:22:54.141392] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be7da0) on tqpair=0x1b7fc30 00:31:01.699 [2024-05-15 20:22:54.141403] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.699 [2024-05-15 20:22:54.141407] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.699 [2024-05-15 20:22:54.141410] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7fc30) 00:31:01.699 [2024-05-15 20:22:54.141417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.699 [2024-05-15 20:22:54.141429] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be7da0, cid 3, qid 0 00:31:01.699 [2024-05-15 20:22:54.141631] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.699 [2024-05-15 20:22:54.141638] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.699 [2024-05-15 20:22:54.141641] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.699 [2024-05-15 20:22:54.141645] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be7da0) on tqpair=0x1b7fc30 00:31:01.699 [2024-05-15 20:22:54.141655] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.699 [2024-05-15 20:22:54.141659] 
nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.699 [2024-05-15 20:22:54.141663] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7fc30) 00:31:01.699 [2024-05-15 20:22:54.141669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.699 [2024-05-15 20:22:54.141679] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be7da0, cid 3, qid 0 00:31:01.699 [2024-05-15 20:22:54.141878] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.699 [2024-05-15 20:22:54.141885] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.699 [2024-05-15 20:22:54.141888] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.699 [2024-05-15 20:22:54.141892] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be7da0) on tqpair=0x1b7fc30 00:31:01.699 [2024-05-15 20:22:54.141902] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.699 [2024-05-15 20:22:54.141906] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.699 [2024-05-15 20:22:54.141909] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7fc30) 00:31:01.699 [2024-05-15 20:22:54.141916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.699 [2024-05-15 20:22:54.141926] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be7da0, cid 3, qid 0 00:31:01.699 [2024-05-15 20:22:54.142122] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.699 [2024-05-15 20:22:54.142129] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.699 [2024-05-15 20:22:54.142132] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.699 [2024-05-15 20:22:54.142136] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be7da0) on tqpair=0x1b7fc30 00:31:01.699 [2024-05-15 20:22:54.142147] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.699 [2024-05-15 20:22:54.142151] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.699 [2024-05-15 20:22:54.142154] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7fc30) 00:31:01.699 [2024-05-15 20:22:54.142161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.699 [2024-05-15 20:22:54.142170] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be7da0, cid 3, qid 0 00:31:01.699 [2024-05-15 20:22:54.142404] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.699 [2024-05-15 20:22:54.142410] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.699 [2024-05-15 20:22:54.142414] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.699 [2024-05-15 20:22:54.142417] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be7da0) on tqpair=0x1b7fc30 00:31:01.699 [2024-05-15 20:22:54.142428] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.699 [2024-05-15 20:22:54.142432] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.700 [2024-05-15 20:22:54.142435] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x1b7fc30) 00:31:01.700 [2024-05-15 20:22:54.142442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.700 [2024-05-15 20:22:54.142452] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be7da0, cid 3, qid 0 00:31:01.700 [2024-05-15 20:22:54.142669] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.700 [2024-05-15 20:22:54.142675] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.700 [2024-05-15 20:22:54.142679] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.700 [2024-05-15 20:22:54.142683] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be7da0) on tqpair=0x1b7fc30 00:31:01.700 [2024-05-15 20:22:54.142693] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.700 [2024-05-15 20:22:54.142697] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.700 [2024-05-15 20:22:54.142700] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7fc30) 00:31:01.700 [2024-05-15 20:22:54.142707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.700 [2024-05-15 20:22:54.142716] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be7da0, cid 3, qid 0 00:31:01.700 [2024-05-15 20:22:54.142998] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.700 [2024-05-15 20:22:54.143004] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.700 [2024-05-15 20:22:54.143008] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.700 [2024-05-15 20:22:54.143011] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be7da0) on tqpair=0x1b7fc30 00:31:01.700 [2024-05-15 20:22:54.143022] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.700 [2024-05-15 20:22:54.143026] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.700 [2024-05-15 20:22:54.143029] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7fc30) 00:31:01.700 [2024-05-15 20:22:54.143036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.700 [2024-05-15 20:22:54.143045] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be7da0, cid 3, qid 0 00:31:01.700 [2024-05-15 20:22:54.143152] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.700 [2024-05-15 20:22:54.143159] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.700 [2024-05-15 20:22:54.143162] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.700 [2024-05-15 20:22:54.143166] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be7da0) on tqpair=0x1b7fc30 00:31:01.700 [2024-05-15 20:22:54.143176] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:01.700 [2024-05-15 20:22:54.143180] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:01.700 [2024-05-15 20:22:54.143183] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b7fc30) 00:31:01.700 [2024-05-15 20:22:54.143190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:01.700 [2024-05-15 20:22:54.143200] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1be7da0, cid 3, qid 0 00:31:01.700 [2024-05-15 20:22:54.147320] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:01.700 [2024-05-15 20:22:54.147328] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:01.700 [2024-05-15 20:22:54.147332] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:01.700 [2024-05-15 20:22:54.147336] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1be7da0) on tqpair=0x1b7fc30 00:31:01.700 [2024-05-15 20:22:54.147344] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:31:01.700 0 Kelvin (-273 Celsius) 00:31:01.700 Available Spare: 0% 00:31:01.700 Available Spare Threshold: 0% 00:31:01.700 Life Percentage Used: 0% 00:31:01.700 Data Units Read: 0 00:31:01.700 Data Units Written: 0 00:31:01.700 Host Read Commands: 0 00:31:01.700 Host Write Commands: 0 00:31:01.700 Controller Busy Time: 0 minutes 00:31:01.700 Power Cycles: 0 00:31:01.700 Power On Hours: 0 hours 00:31:01.700 Unsafe Shutdowns: 0 00:31:01.700 Unrecoverable Media Errors: 0 00:31:01.700 Lifetime Error Log Entries: 0 00:31:01.700 Warning Temperature Time: 0 minutes 00:31:01.700 Critical Temperature Time: 0 minutes 00:31:01.700 00:31:01.700 Number of Queues 00:31:01.700 ================ 00:31:01.700 Number of I/O Submission Queues: 127 00:31:01.700 Number of I/O Completion Queues: 127 00:31:01.700 00:31:01.700 Active Namespaces 00:31:01.700 ================= 00:31:01.700 Namespace ID:1 00:31:01.700 Error Recovery Timeout: Unlimited 00:31:01.700 Command Set Identifier: NVM (00h) 00:31:01.700 Deallocate: Supported 00:31:01.700 Deallocated/Unwritten Error: Not Supported 00:31:01.700 Deallocated Read Value: Unknown 00:31:01.700 Deallocate in Write Zeroes: Not Supported 00:31:01.700 Deallocated Guard Field: 0xFFFF 00:31:01.700 Flush: Supported 00:31:01.700 Reservation: Supported 00:31:01.700 Namespace Sharing Capabilities: Multiple Controllers 00:31:01.700 Size (in LBAs): 131072 (0GiB) 00:31:01.700 Capacity (in LBAs): 131072 (0GiB) 00:31:01.700 Utilization (in LBAs): 131072 (0GiB) 00:31:01.700 NGUID: ABCDEF0123456789ABCDEF0123456789 00:31:01.700 EUI64: ABCDEF0123456789 00:31:01.700 UUID: 1069e988-a77a-4589-93e5-1f4bf0b7c949 00:31:01.700 Thin Provisioning: Not Supported 00:31:01.700 Per-NS Atomic Units: Yes 00:31:01.700 Atomic Boundary Size (Normal): 0 00:31:01.700 Atomic Boundary Size (PFail): 0 00:31:01.700 Atomic Boundary Offset: 0 00:31:01.700 Maximum Single Source Range Length: 65535 00:31:01.700 Maximum Copy Length: 65535 00:31:01.700 Maximum Source Range Count: 1 00:31:01.700 NGUID/EUI64 Never Reused: No 00:31:01.700 Namespace Write Protected: No 00:31:01.700 Number of LBA Formats: 1 00:31:01.700 Current LBA Format: LBA Format #00 00:31:01.700 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:01.700 00:31:01.700 20:22:54 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:31:01.700 20:22:54 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:01.700 20:22:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.700 20:22:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:01.700 20:22:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.700 20:22:54 nvmf_tcp.nvmf_identify -- 
host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:31:01.700 20:22:54 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:31:01.700 20:22:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:01.700 20:22:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:31:01.700 20:22:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:01.700 20:22:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:31:01.700 20:22:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:01.700 20:22:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:01.700 rmmod nvme_tcp 00:31:01.961 rmmod nvme_fabrics 00:31:01.961 rmmod nvme_keyring 00:31:01.961 20:22:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:01.961 20:22:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:31:01.961 20:22:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:31:01.961 20:22:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 209974 ']' 00:31:01.961 20:22:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 209974 00:31:01.961 20:22:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 209974 ']' 00:31:01.961 20:22:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 209974 00:31:01.961 20:22:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:31:01.961 20:22:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:01.961 20:22:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 209974 00:31:01.961 20:22:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:01.961 20:22:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:01.961 20:22:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 209974' 00:31:01.961 killing process with pid 209974 00:31:01.961 20:22:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 209974 00:31:01.961 [2024-05-15 20:22:54.309840] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:01.961 20:22:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 209974 00:31:01.961 20:22:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:01.961 20:22:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:01.961 20:22:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:01.961 20:22:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:01.961 20:22:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:01.961 20:22:54 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:01.961 20:22:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:01.961 20:22:54 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:04.510 20:22:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:04.510 00:31:04.510 real 0m12.034s 00:31:04.510 user 0m8.573s 00:31:04.510 sys 0m6.402s 00:31:04.510 20:22:56 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:31:04.510 20:22:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:04.510 ************************************ 00:31:04.510 END TEST nvmf_identify 00:31:04.510 ************************************ 00:31:04.510 20:22:56 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:31:04.510 20:22:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:04.510 20:22:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:04.510 20:22:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:04.510 ************************************ 00:31:04.510 START TEST nvmf_perf 00:31:04.510 ************************************ 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:31:04.510 * Looking for test storage... 00:31:04.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # 
nvmftestinit 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:31:04.510 20:22:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:12.657 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:12.657 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:31:12.657 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:12.657 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:12.657 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:12.657 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:12.657 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:12.657 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:31:12.657 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:12.657 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:31:12.657 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:31:12.657 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:31:12.657 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:31:12.657 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:31:12.657 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:31:12.657 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:12.657 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:12.657 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:12.657 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:12.657 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:12.657 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:12.657 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:12.657 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:12.657 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:12.657 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:12.657 20:23:04 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:12.657 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:12.658 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:12.658 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:12.658 Found net devices under 0000:31:00.0: cvl_0_0 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:12.658 20:23:04 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:12.658 Found net devices under 0000:31:00.1: cvl_0_1 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:12.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:12.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.595 ms 00:31:12.658 00:31:12.658 --- 10.0.0.2 ping statistics --- 00:31:12.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:12.658 rtt min/avg/max/mdev = 0.595/0.595/0.595/0.000 ms 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:12.658 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:12.658 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:31:12.658 00:31:12.658 --- 10.0.0.1 ping statistics --- 00:31:12.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:12.658 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:12.658 20:23:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:12.658 20:23:05 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:31:12.658 20:23:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:12.658 20:23:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:12.658 20:23:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:12.658 20:23:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=215002 00:31:12.658 20:23:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 215002 00:31:12.658 20:23:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:12.658 20:23:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 215002 ']' 00:31:12.658 20:23:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:12.658 20:23:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:12.658 20:23:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:12.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:12.658 20:23:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:12.658 20:23:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:12.658 [2024-05-15 20:23:05.094594] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:31:12.658 [2024-05-15 20:23:05.094646] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:12.658 EAL: No free 2048 kB hugepages reported on node 1 00:31:12.919 [2024-05-15 20:23:05.182872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:12.919 [2024-05-15 20:23:05.259374] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:12.919 [2024-05-15 20:23:05.259422] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:12.919 [2024-05-15 20:23:05.259430] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:12.919 [2024-05-15 20:23:05.259436] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:12.919 [2024-05-15 20:23:05.259442] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:12.919 [2024-05-15 20:23:05.259562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:12.919 [2024-05-15 20:23:05.259687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:12.919 [2024-05-15 20:23:05.259849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:12.919 [2024-05-15 20:23:05.259850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:13.490 20:23:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:13.490 20:23:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:31:13.490 20:23:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:13.490 20:23:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:13.490 20:23:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:13.749 20:23:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:13.749 20:23:06 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:13.749 20:23:06 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:31:14.321 20:23:06 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:31:14.321 20:23:06 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:31:14.321 20:23:06 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:31:14.321 20:23:06 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:14.582 20:23:06 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:31:14.582 20:23:06 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:31:14.582 20:23:06 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:31:14.582 20:23:06 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:31:14.582 20:23:06 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:31:14.848 [2024-05-15 20:23:07.182889] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
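Condensed from the rpc.py calls traced just above and immediately below, the target-side export that host/perf.sh performs in this run is, in sketch form (the full /var/jenkins/workspace/.../spdk/scripts/rpc.py path is shortened to rpc.py here; NQN, serial number, address and port are the values used in this run):

# create the TCP transport, then expose one subsystem backed by the Malloc0 and Nvme0n1 bdevs
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Each spdk_nvme_perf invocation later in the log then connects to this listener with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'.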
00:31:14.848 20:23:07 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:15.108 20:23:07 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:15.108 20:23:07 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:15.368 20:23:07 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:15.368 20:23:07 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:15.631 20:23:07 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:15.631 [2024-05-15 20:23:08.061897] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:15.632 [2024-05-15 20:23:08.062143] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:15.632 20:23:08 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:15.942 20:23:08 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:31:15.942 20:23:08 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:31:15.942 20:23:08 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:31:15.942 20:23:08 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:31:17.328 Initializing NVMe Controllers 00:31:17.328 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:31:17.328 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:31:17.328 Initialization complete. Launching workers. 00:31:17.328 ======================================================== 00:31:17.328 Latency(us) 00:31:17.328 Device Information : IOPS MiB/s Average min max 00:31:17.328 PCIE (0000:65:00.0) NSID 1 from core 0: 79440.86 310.32 402.26 13.32 5206.43 00:31:17.328 ======================================================== 00:31:17.328 Total : 79440.86 310.32 402.26 13.32 5206.43 00:31:17.328 00:31:17.328 20:23:09 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:17.328 EAL: No free 2048 kB hugepages reported on node 1 00:31:18.708 Initializing NVMe Controllers 00:31:18.708 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:18.708 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:18.708 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:18.708 Initialization complete. Launching workers. 
00:31:18.708 ======================================================== 00:31:18.708 Latency(us) 00:31:18.708 Device Information : IOPS MiB/s Average min max 00:31:18.708 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 105.41 0.41 9637.42 417.01 45067.71 00:31:18.708 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 59.66 0.23 16760.13 7947.94 51879.80 00:31:18.708 ======================================================== 00:31:18.708 Total : 165.07 0.64 12211.89 417.01 51879.80 00:31:18.708 00:31:18.708 20:23:10 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:18.708 EAL: No free 2048 kB hugepages reported on node 1 00:31:19.653 Initializing NVMe Controllers 00:31:19.653 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:19.653 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:19.653 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:19.653 Initialization complete. Launching workers. 00:31:19.653 ======================================================== 00:31:19.653 Latency(us) 00:31:19.653 Device Information : IOPS MiB/s Average min max 00:31:19.653 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9054.99 35.37 3534.04 487.38 7268.53 00:31:19.653 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3854.00 15.05 8347.53 6937.02 15937.25 00:31:19.653 ======================================================== 00:31:19.653 Total : 12908.99 50.43 4971.11 487.38 15937.25 00:31:19.653 00:31:19.653 20:23:12 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:31:19.653 20:23:12 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:31:19.653 20:23:12 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:19.914 EAL: No free 2048 kB hugepages reported on node 1 00:31:22.456 Initializing NVMe Controllers 00:31:22.457 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:22.457 Controller IO queue size 128, less than required. 00:31:22.457 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:22.457 Controller IO queue size 128, less than required. 00:31:22.457 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:22.457 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:22.457 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:22.457 Initialization complete. Launching workers. 
00:31:22.457 ======================================================== 00:31:22.457 Latency(us) 00:31:22.457 Device Information : IOPS MiB/s Average min max 00:31:22.457 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 952.82 238.21 137814.77 71488.07 225145.22 00:31:22.457 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 580.09 145.02 227822.21 86115.95 336234.97 00:31:22.457 ======================================================== 00:31:22.457 Total : 1532.91 383.23 171875.61 71488.07 336234.97 00:31:22.457 00:31:22.457 20:23:14 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:31:22.457 EAL: No free 2048 kB hugepages reported on node 1 00:31:22.457 No valid NVMe controllers or AIO or URING devices found 00:31:22.457 Initializing NVMe Controllers 00:31:22.457 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:22.457 Controller IO queue size 128, less than required. 00:31:22.457 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:22.457 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:31:22.457 Controller IO queue size 128, less than required. 00:31:22.457 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:22.457 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:31:22.457 WARNING: Some requested NVMe devices were skipped 00:31:22.457 20:23:14 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:31:22.457 EAL: No free 2048 kB hugepages reported on node 1 00:31:25.001 Initializing NVMe Controllers 00:31:25.001 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:25.001 Controller IO queue size 128, less than required. 00:31:25.001 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:25.001 Controller IO queue size 128, less than required. 00:31:25.001 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:25.001 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:25.001 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:25.001 Initialization complete. Launching workers. 
00:31:25.001 00:31:25.001 ==================== 00:31:25.001 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:31:25.001 TCP transport: 00:31:25.001 polls: 35684 00:31:25.001 idle_polls: 11991 00:31:25.001 sock_completions: 23693 00:31:25.001 nvme_completions: 4133 00:31:25.001 submitted_requests: 6202 00:31:25.001 queued_requests: 1 00:31:25.001 00:31:25.001 ==================== 00:31:25.001 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:31:25.001 TCP transport: 00:31:25.001 polls: 39211 00:31:25.001 idle_polls: 15326 00:31:25.001 sock_completions: 23885 00:31:25.001 nvme_completions: 3979 00:31:25.001 submitted_requests: 5984 00:31:25.001 queued_requests: 1 00:31:25.001 ======================================================== 00:31:25.001 Latency(us) 00:31:25.001 Device Information : IOPS MiB/s Average min max 00:31:25.001 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1032.99 258.25 127280.39 66986.76 193375.84 00:31:25.001 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 994.49 248.62 132051.29 56424.15 197510.11 00:31:25.001 ======================================================== 00:31:25.001 Total : 2027.48 506.87 129620.54 56424.15 197510.11 00:31:25.001 00:31:25.001 20:23:17 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:31:25.001 20:23:17 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:25.001 20:23:17 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:31:25.001 20:23:17 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:31:25.262 20:23:17 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:31:26.207 20:23:18 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=9ce0d5ba-18a2-4782-b590-db59c5e52439 00:31:26.207 20:23:18 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 9ce0d5ba-18a2-4782-b590-db59c5e52439 00:31:26.207 20:23:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=9ce0d5ba-18a2-4782-b590-db59c5e52439 00:31:26.207 20:23:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:31:26.207 20:23:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:31:26.207 20:23:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:31:26.207 20:23:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:26.467 20:23:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:31:26.467 { 00:31:26.467 "uuid": "9ce0d5ba-18a2-4782-b590-db59c5e52439", 00:31:26.467 "name": "lvs_0", 00:31:26.467 "base_bdev": "Nvme0n1", 00:31:26.467 "total_data_clusters": 457407, 00:31:26.467 "free_clusters": 457407, 00:31:26.467 "block_size": 512, 00:31:26.468 "cluster_size": 4194304 00:31:26.468 } 00:31:26.468 ]' 00:31:26.468 20:23:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="9ce0d5ba-18a2-4782-b590-db59c5e52439") .free_clusters' 00:31:26.468 20:23:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=457407 00:31:26.468 20:23:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="9ce0d5ba-18a2-4782-b590-db59c5e52439") .cluster_size' 00:31:26.468 20:23:18 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:31:26.468 20:23:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=1829628 00:31:26.468 20:23:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 1829628 00:31:26.468 1829628 00:31:26.468 20:23:18 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:31:26.468 20:23:18 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:31:26.468 20:23:18 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9ce0d5ba-18a2-4782-b590-db59c5e52439 lbd_0 20480 00:31:26.728 20:23:19 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=334a6f8c-3f95-47f4-a626-68f2c8bc6802 00:31:26.728 20:23:19 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 334a6f8c-3f95-47f4-a626-68f2c8bc6802 lvs_n_0 00:31:28.644 20:23:20 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=920956a9-5bc6-484a-9b52-e99b2bb784df 00:31:28.644 20:23:20 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 920956a9-5bc6-484a-9b52-e99b2bb784df 00:31:28.644 20:23:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=920956a9-5bc6-484a-9b52-e99b2bb784df 00:31:28.644 20:23:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:31:28.644 20:23:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:31:28.644 20:23:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:31:28.644 20:23:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:28.644 20:23:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:31:28.644 { 00:31:28.644 "uuid": "9ce0d5ba-18a2-4782-b590-db59c5e52439", 00:31:28.644 "name": "lvs_0", 00:31:28.644 "base_bdev": "Nvme0n1", 00:31:28.644 "total_data_clusters": 457407, 00:31:28.644 "free_clusters": 452287, 00:31:28.644 "block_size": 512, 00:31:28.644 "cluster_size": 4194304 00:31:28.644 }, 00:31:28.644 { 00:31:28.644 "uuid": "920956a9-5bc6-484a-9b52-e99b2bb784df", 00:31:28.644 "name": "lvs_n_0", 00:31:28.644 "base_bdev": "334a6f8c-3f95-47f4-a626-68f2c8bc6802", 00:31:28.644 "total_data_clusters": 5114, 00:31:28.644 "free_clusters": 5114, 00:31:28.644 "block_size": 512, 00:31:28.644 "cluster_size": 4194304 00:31:28.644 } 00:31:28.644 ]' 00:31:28.644 20:23:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="920956a9-5bc6-484a-9b52-e99b2bb784df") .free_clusters' 00:31:28.644 20:23:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=5114 00:31:28.644 20:23:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="920956a9-5bc6-484a-9b52-e99b2bb784df") .cluster_size' 00:31:28.644 20:23:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:31:28.644 20:23:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=20456 00:31:28.644 20:23:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 20456 00:31:28.644 20456 00:31:28.644 20:23:21 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:31:28.644 20:23:21 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 920956a9-5bc6-484a-9b52-e99b2bb784df lbd_nest_0 20456 00:31:28.905 20:23:21 
nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=61e9b93b-450d-4ba6-bd15-5a18ea8216dc 00:31:28.905 20:23:21 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:29.166 20:23:21 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:31:29.166 20:23:21 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 61e9b93b-450d-4ba6-bd15-5a18ea8216dc 00:31:29.427 20:23:21 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:29.688 20:23:21 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:31:29.688 20:23:21 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:31:29.688 20:23:21 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:29.688 20:23:21 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:29.688 20:23:21 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:29.688 EAL: No free 2048 kB hugepages reported on node 1 00:31:41.932 Initializing NVMe Controllers 00:31:41.932 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:41.932 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:41.932 Initialization complete. Launching workers. 00:31:41.932 ======================================================== 00:31:41.932 Latency(us) 00:31:41.932 Device Information : IOPS MiB/s Average min max 00:31:41.932 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 49.90 0.02 20064.86 335.76 45250.11 00:31:41.932 ======================================================== 00:31:41.932 Total : 49.90 0.02 20064.86 335.76 45250.11 00:31:41.932 00:31:41.932 20:23:32 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:41.932 20:23:32 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:41.932 EAL: No free 2048 kB hugepages reported on node 1 00:31:51.930 Initializing NVMe Controllers 00:31:51.930 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:51.930 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:51.930 Initialization complete. Launching workers. 
00:31:51.930 ======================================================== 00:31:51.930 Latency(us) 00:31:51.930 Device Information : IOPS MiB/s Average min max 00:31:51.930 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 80.80 10.10 12385.12 5022.73 23909.78 00:31:51.930 ======================================================== 00:31:51.930 Total : 80.80 10.10 12385.12 5022.73 23909.78 00:31:51.930 00:31:51.930 20:23:42 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:51.930 20:23:42 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:51.930 20:23:42 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:51.930 EAL: No free 2048 kB hugepages reported on node 1 00:32:01.945 Initializing NVMe Controllers 00:32:01.945 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:01.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:01.945 Initialization complete. Launching workers. 00:32:01.945 ======================================================== 00:32:01.945 Latency(us) 00:32:01.945 Device Information : IOPS MiB/s Average min max 00:32:01.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6683.45 3.26 4787.56 308.48 11949.33 00:32:01.945 ======================================================== 00:32:01.945 Total : 6683.45 3.26 4787.56 308.48 11949.33 00:32:01.945 00:32:01.945 20:23:52 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:01.945 20:23:52 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:01.945 EAL: No free 2048 kB hugepages reported on node 1 00:32:11.940 Initializing NVMe Controllers 00:32:11.940 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:11.940 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:11.940 Initialization complete. Launching workers. 00:32:11.940 ======================================================== 00:32:11.940 Latency(us) 00:32:11.940 Device Information : IOPS MiB/s Average min max 00:32:11.940 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1985.50 248.19 16138.80 1129.47 36310.15 00:32:11.940 ======================================================== 00:32:11.940 Total : 1985.50 248.19 16138.80 1129.47 36310.15 00:32:11.940 00:32:11.940 20:24:03 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:11.940 20:24:03 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:11.940 20:24:03 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:11.940 EAL: No free 2048 kB hugepages reported on node 1 00:32:22.181 Initializing NVMe Controllers 00:32:22.181 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:22.181 Controller IO queue size 128, less than required. 00:32:22.181 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:32:22.181 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:22.181 Initialization complete. Launching workers. 00:32:22.181 ======================================================== 00:32:22.181 Latency(us) 00:32:22.181 Device Information : IOPS MiB/s Average min max 00:32:22.181 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11523.40 5.63 11115.97 1695.10 22368.21 00:32:22.181 ======================================================== 00:32:22.181 Total : 11523.40 5.63 11115.97 1695.10 22368.21 00:32:22.181 00:32:22.181 20:24:13 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:22.181 20:24:13 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:22.181 EAL: No free 2048 kB hugepages reported on node 1 00:32:32.172 Initializing NVMe Controllers 00:32:32.172 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:32.172 Controller IO queue size 128, less than required. 00:32:32.172 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:32.173 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:32.173 Initialization complete. Launching workers. 00:32:32.173 ======================================================== 00:32:32.173 Latency(us) 00:32:32.173 Device Information : IOPS MiB/s Average min max 00:32:32.173 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1158.20 144.77 110730.80 22769.81 251267.41 00:32:32.173 ======================================================== 00:32:32.173 Total : 1158.20 144.77 110730.80 22769.81 251267.41 00:32:32.173 00:32:32.173 20:24:24 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:32.173 20:24:24 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 61e9b93b-450d-4ba6-bd15-5a18ea8216dc 00:32:33.556 20:24:25 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:33.817 20:24:26 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 334a6f8c-3f95-47f4-a626-68f2c8bc6802 00:32:34.077 20:24:26 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:34.077 20:24:26 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:32:34.077 20:24:26 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:32:34.077 20:24:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:34.077 20:24:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:32:34.077 20:24:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:34.077 20:24:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:32:34.077 20:24:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:34.077 20:24:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:34.077 rmmod nvme_tcp 00:32:34.077 rmmod nvme_fabrics 00:32:34.337 rmmod nvme_keyring 00:32:34.337 20:24:26 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:34.337 20:24:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:32:34.337 20:24:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:32:34.337 20:24:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 215002 ']' 00:32:34.337 20:24:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 215002 00:32:34.337 20:24:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 215002 ']' 00:32:34.337 20:24:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 215002 00:32:34.337 20:24:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:32:34.337 20:24:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:34.337 20:24:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 215002 00:32:34.337 20:24:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:34.337 20:24:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:34.337 20:24:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 215002' 00:32:34.337 killing process with pid 215002 00:32:34.337 20:24:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 215002 00:32:34.337 [2024-05-15 20:24:26.666039] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:32:34.337 20:24:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 215002 00:32:36.248 20:24:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:36.248 20:24:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:36.248 20:24:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:36.248 20:24:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:36.248 20:24:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:36.248 20:24:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:36.248 20:24:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:36.248 20:24:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:38.791 20:24:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:38.791 00:32:38.791 real 1m34.095s 00:32:38.791 user 5m31.810s 00:32:38.791 sys 0m14.503s 00:32:38.791 20:24:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:38.791 20:24:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:38.791 ************************************ 00:32:38.791 END TEST nvmf_perf 00:32:38.791 ************************************ 00:32:38.791 20:24:30 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:38.791 20:24:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:38.791 20:24:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:38.791 20:24:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:38.791 ************************************ 00:32:38.791 START TEST nvmf_fio_host 00:32:38.791 ************************************ 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:38.791 * Looking for test storage... 00:32:38.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:38.791 20:24:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.927 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
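The NIC discovery that gather_supported_nvmf_pci_devs traces next reduces to roughly the following sketch, condensed from the nvmf/common.sh lines in this log (the e810/x722/mlx arrays are filled from the PCI ID cache shown above, and the interface names are the ones found on this host):

# for each supported NIC on the PCI bus, record its kernel net interface name
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:31:00.0/net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs prefix, keep cvl_0_0
    net_devs+=("${pci_net_devs[@]}")
done
# in this run the two E810 (0x8086:0x159b, ice) ports resolve to cvl_0_0 (target side)
# and cvl_0_1 (initiator side); nvmf_tcp_init then moves cvl_0_0 into the
# cvl_0_0_ns_spdk namespace with 10.0.0.2/24 while cvl_0_1 keeps 10.0.0.1/24.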
00:32:46.927 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:32:46.927 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:46.927 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:46.927 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:46.928 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:46.928 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:46.928 Found net devices under 0000:31:00.0: cvl_0_0 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:46.928 Found net devices under 0000:31:00.1: cvl_0_1 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp 
]] 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:46.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:46.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:32:46.928 00:32:46.928 --- 10.0.0.2 ping statistics --- 00:32:46.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.928 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:46.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:46.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.465 ms 00:32:46.928 00:32:46.928 --- 10.0.0.1 ping statistics --- 00:32:46.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.928 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=236045 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 236045 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 236045 ']' 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:46.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:46.928 20:24:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.928 [2024-05-15 20:24:38.816127] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:32:46.928 [2024-05-15 20:24:38.816173] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:46.928 EAL: No free 2048 kB hugepages reported on node 1 00:32:46.928 [2024-05-15 20:24:38.905428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:46.929 [2024-05-15 20:24:38.975415] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:32:46.929 [2024-05-15 20:24:38.975449] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:46.929 [2024-05-15 20:24:38.975457] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:46.929 [2024-05-15 20:24:38.975463] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:46.929 [2024-05-15 20:24:38.975468] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:46.929 [2024-05-15 20:24:38.975619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:46.929 [2024-05-15 20:24:38.975733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:46.929 [2024-05-15 20:24:38.975888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:46.929 [2024-05-15 20:24:38.975889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:47.189 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:47.189 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:32:47.189 20:24:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:47.189 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.189 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.189 [2024-05-15 20:24:39.685014] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.450 Malloc1 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 
-- # set +x 00:32:47.450 [2024-05-15 20:24:39.784294] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:32:47.450 [2024-05-15 20:24:39.784507] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:32:47.450 
20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:47.450 20:24:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:47.711 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:47.711 fio-3.35 00:32:47.711 Starting 1 thread 00:32:47.711 EAL: No free 2048 kB hugepages reported on node 1 00:32:50.254 00:32:50.254 test: (groupid=0, jobs=1): err= 0: pid=236569: Wed May 15 20:24:42 2024 00:32:50.254 read: IOPS=9762, BW=38.1MiB/s (40.0MB/s)(76.5MiB/2006msec) 00:32:50.254 slat (usec): min=2, max=273, avg= 2.24, stdev= 2.73 00:32:50.254 clat (usec): min=3750, max=12268, avg=7247.90, stdev=527.25 00:32:50.254 lat (usec): min=3782, max=12270, avg=7250.14, stdev=527.18 00:32:50.254 clat percentiles (usec): 00:32:50.254 | 1.00th=[ 6063], 5.00th=[ 6390], 10.00th=[ 6587], 20.00th=[ 6849], 00:32:50.254 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7373], 00:32:50.254 | 70.00th=[ 7504], 80.00th=[ 7635], 90.00th=[ 7832], 95.00th=[ 8029], 00:32:50.254 | 99.00th=[ 8455], 99.50th=[ 8717], 99.90th=[10814], 99.95th=[11469], 00:32:50.254 | 99.99th=[12256] 00:32:50.254 bw ( KiB/s): min=38296, max=39520, per=99.96%, avg=39032.00, stdev=556.68, samples=4 00:32:50.254 iops : min= 9574, max= 9880, avg=9758.00, stdev=139.17, samples=4 00:32:50.254 write: IOPS=9773, BW=38.2MiB/s (40.0MB/s)(76.6MiB/2006msec); 0 zone resets 00:32:50.254 slat (usec): min=2, max=276, avg= 2.32, stdev= 2.17 00:32:50.254 clat (usec): min=2902, max=11264, avg=5802.26, stdev=451.38 00:32:50.254 lat (usec): min=2919, max=11266, avg=5804.59, stdev=451.35 00:32:50.254 clat percentiles (usec): 00:32:50.254 | 1.00th=[ 4752], 5.00th=[ 5145], 10.00th=[ 5276], 20.00th=[ 5473], 00:32:50.254 | 30.00th=[ 5604], 40.00th=[ 5669], 50.00th=[ 5800], 60.00th=[ 5932], 00:32:50.255 | 70.00th=[ 5997], 80.00th=[ 6128], 90.00th=[ 6325], 95.00th=[ 6456], 00:32:50.255 | 99.00th=[ 6783], 99.50th=[ 6915], 99.90th=[ 9634], 99.95th=[10683], 00:32:50.255 | 99.99th=[11207] 00:32:50.255 bw ( KiB/s): min=38864, max=39552, per=99.99%, avg=39092.00, stdev=315.54, samples=4 00:32:50.255 iops : min= 9716, max= 9888, avg=9773.00, stdev=78.88, samples=4 00:32:50.255 lat (msec) : 4=0.06%, 10=99.82%, 20=0.12% 00:32:50.255 cpu : usr=68.33%, sys=27.78%, ctx=56, majf=0, minf=5 00:32:50.255 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:50.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:50.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:50.255 issued rwts: total=19583,19606,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:50.255 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:50.255 00:32:50.255 Run status group 0 (all jobs): 00:32:50.255 READ: bw=38.1MiB/s (40.0MB/s), 38.1MiB/s-38.1MiB/s (40.0MB/s-40.0MB/s), io=76.5MiB (80.2MB), run=2006-2006msec 00:32:50.255 WRITE: bw=38.2MiB/s (40.0MB/s), 38.2MiB/s-38.2MiB/s (40.0MB/s-40.0MB/s), io=76.6MiB (80.3MB), run=2006-2006msec 00:32:50.255 20:24:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:50.255 20:24:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:50.255 20:24:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:32:50.255 20:24:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:50.255 20:24:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:32:50.255 20:24:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:50.255 20:24:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:32:50.255 20:24:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:32:50.255 20:24:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:32:50.255 20:24:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:50.255 20:24:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:32:50.255 20:24:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:32:50.255 20:24:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:32:50.255 20:24:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:32:50.255 20:24:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:32:50.255 20:24:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:50.255 20:24:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:32:50.255 20:24:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:32:50.533 20:24:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:32:50.533 20:24:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:32:50.533 20:24:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:50.533 20:24:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:50.791 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:32:50.791 fio-3.35 00:32:50.791 Starting 1 thread 00:32:50.791 EAL: No free 2048 kB hugepages reported on node 1 00:32:53.331 00:32:53.331 test: (groupid=0, jobs=1): err= 0: pid=237109: Wed May 15 20:24:45 2024 00:32:53.331 read: IOPS=8644, BW=135MiB/s (142MB/s)(271MiB/2005msec) 00:32:53.331 slat (usec): min=3, max=108, avg= 3.68, stdev= 1.44 00:32:53.331 clat (usec): min=3096, max=55215, avg=9116.47, stdev=4072.88 00:32:53.331 lat (usec): min=3100, max=55218, avg=9120.15, stdev=4072.97 
00:32:53.331 clat percentiles (usec): 00:32:53.331 | 1.00th=[ 4621], 5.00th=[ 5538], 10.00th=[ 6063], 20.00th=[ 6783], 00:32:53.331 | 30.00th=[ 7439], 40.00th=[ 8094], 50.00th=[ 8717], 60.00th=[ 9372], 00:32:53.331 | 70.00th=[10028], 80.00th=[10814], 90.00th=[11994], 95.00th=[12387], 00:32:53.331 | 99.00th=[15533], 99.50th=[47449], 99.90th=[51643], 99.95th=[52167], 00:32:53.331 | 99.99th=[55313] 00:32:53.331 bw ( KiB/s): min=59296, max=85920, per=52.12%, avg=72088.00, stdev=14742.35, samples=4 00:32:53.331 iops : min= 3706, max= 5370, avg=4505.50, stdev=921.40, samples=4 00:32:53.331 write: IOPS=5434, BW=84.9MiB/s (89.0MB/s)(147MiB/1736msec); 0 zone resets 00:32:53.331 slat (usec): min=40, max=450, avg=41.21, stdev= 8.33 00:32:53.331 clat (usec): min=3240, max=16046, avg=9623.55, stdev=1567.42 00:32:53.331 lat (usec): min=3280, max=16185, avg=9664.76, stdev=1569.25 00:32:53.331 clat percentiles (usec): 00:32:53.331 | 1.00th=[ 6390], 5.00th=[ 7308], 10.00th=[ 7767], 20.00th=[ 8356], 00:32:53.331 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9634], 60.00th=[ 9896], 00:32:53.331 | 70.00th=[10290], 80.00th=[10814], 90.00th=[11469], 95.00th=[12256], 00:32:53.331 | 99.00th=[14091], 99.50th=[14877], 99.90th=[15533], 99.95th=[15664], 00:32:53.331 | 99.99th=[16057] 00:32:53.331 bw ( KiB/s): min=60864, max=90080, per=86.35%, avg=75088.00, stdev=15600.79, samples=4 00:32:53.331 iops : min= 3804, max= 5630, avg=4693.00, stdev=975.05, samples=4 00:32:53.331 lat (msec) : 4=0.18%, 10=66.62%, 20=32.72%, 50=0.31%, 100=0.16% 00:32:53.331 cpu : usr=82.98%, sys=14.02%, ctx=19, majf=0, minf=16 00:32:53.331 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:32:53.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:53.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:53.331 issued rwts: total=17332,9435,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:53.331 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:53.331 00:32:53.331 Run status group 0 (all jobs): 00:32:53.331 READ: bw=135MiB/s (142MB/s), 135MiB/s-135MiB/s (142MB/s-142MB/s), io=271MiB (284MB), run=2005-2005msec 00:32:53.331 WRITE: bw=84.9MiB/s (89.0MB/s), 84.9MiB/s-84.9MiB/s (89.0MB/s-89.0MB/s), io=147MiB (155MB), run=1736-1736msec 00:32:53.331 20:24:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:53.331 20:24:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.331 20:24:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.331 20:24:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.331 20:24:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:32:53.331 20:24:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:32:53.331 20:24:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # get_nvme_bdfs 00:32:53.331 20:24:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # bdfs=() 00:32:53.331 20:24:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # local bdfs 00:32:53.331 20:24:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:53.331 20:24:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:53.331 20:24:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # 
jq -r '.config[].params.traddr' 00:32:53.331 20:24:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:32:53.331 20:24:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:65:00.0 00:32:53.331 20:24:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 -i 10.0.0.2 00:32:53.331 20:24:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.331 20:24:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.331 Nvme0n1 00:32:53.331 20:24:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.331 20:24:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:32:53.331 20:24:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.331 20:24:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # ls_guid=8a3cadd9-dcff-4019-b0ce-ec320d733123 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # get_lvs_free_mb 8a3cadd9-dcff-4019-b0ce-ec320d733123 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=8a3cadd9-dcff-4019-b0ce-ec320d733123 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # rpc_cmd bdev_lvol_get_lvstores 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:32:53.901 { 00:32:53.901 "uuid": "8a3cadd9-dcff-4019-b0ce-ec320d733123", 00:32:53.901 "name": "lvs_0", 00:32:53.901 "base_bdev": "Nvme0n1", 00:32:53.901 "total_data_clusters": 1787, 00:32:53.901 "free_clusters": 1787, 00:32:53.901 "block_size": 512, 00:32:53.901 "cluster_size": 1073741824 00:32:53.901 } 00:32:53.901 ]' 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="8a3cadd9-dcff-4019-b0ce-ec320d733123") .free_clusters' 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=1787 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="8a3cadd9-dcff-4019-b0ce-ec320d733123") .cluster_size' 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=1073741824 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=1829888 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 1829888 00:32:53.901 1829888 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 1829888 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.901 20:24:46 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.901 13f0c5ed-f28c-4e3b-b3eb-52a2d00c4cc8 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:32:53.901 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:32:53.902 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:53.902 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:32:53.902 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:32:54.193 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:32:54.193 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:32:54.193 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 
00:32:54.193 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:54.193 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:32:54.193 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:32:54.193 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:32:54.193 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:32:54.193 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:54.193 20:24:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:54.459 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:54.459 fio-3.35 00:32:54.459 Starting 1 thread 00:32:54.459 EAL: No free 2048 kB hugepages reported on node 1 00:32:57.002 00:32:57.002 test: (groupid=0, jobs=1): err= 0: pid=238051: Wed May 15 20:24:49 2024 00:32:57.002 read: IOPS=7228, BW=28.2MiB/s (29.6MB/s)(56.7MiB/2007msec) 00:32:57.002 slat (usec): min=2, max=111, avg= 2.25, stdev= 1.22 00:32:57.002 clat (usec): min=3453, max=15708, avg=9837.44, stdev=791.83 00:32:57.002 lat (usec): min=3471, max=15710, avg=9839.69, stdev=791.76 00:32:57.002 clat percentiles (usec): 00:32:57.002 | 1.00th=[ 8029], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9241], 00:32:57.002 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10028], 00:32:57.002 | 70.00th=[10290], 80.00th=[10421], 90.00th=[10814], 95.00th=[11076], 00:32:57.002 | 99.00th=[11600], 99.50th=[11731], 99.90th=[14353], 99.95th=[15533], 00:32:57.002 | 99.99th=[15664] 00:32:57.002 bw ( KiB/s): min=27920, max=29424, per=99.79%, avg=28852.00, stdev=666.92, samples=4 00:32:57.002 iops : min= 6980, max= 7356, avg=7213.00, stdev=166.73, samples=4 00:32:57.002 write: IOPS=7194, BW=28.1MiB/s (29.5MB/s)(56.4MiB/2007msec); 0 zone resets 00:32:57.002 slat (nsec): min=2160, max=105421, avg=2349.85, stdev=910.41 00:32:57.002 clat (usec): min=1402, max=13305, avg=7818.52, stdev=661.44 00:32:57.002 lat (usec): min=1410, max=13307, avg=7820.87, stdev=661.41 00:32:57.002 clat percentiles (usec): 00:32:57.002 | 1.00th=[ 6259], 5.00th=[ 6783], 10.00th=[ 7046], 20.00th=[ 7308], 00:32:57.002 | 30.00th=[ 7504], 40.00th=[ 7701], 50.00th=[ 7832], 60.00th=[ 7963], 00:32:57.002 | 70.00th=[ 8160], 80.00th=[ 8356], 90.00th=[ 8586], 95.00th=[ 8848], 00:32:57.002 | 99.00th=[ 9241], 99.50th=[ 9372], 99.90th=[11731], 99.95th=[11994], 00:32:57.002 | 99.99th=[13304] 00:32:57.002 bw ( KiB/s): min=28672, max=28920, per=100.00%, avg=28788.00, stdev=122.64, samples=4 00:32:57.002 iops : min= 7168, max= 7230, avg=7197.00, stdev=30.66, samples=4 00:32:57.002 lat (msec) : 2=0.01%, 4=0.07%, 10=79.17%, 20=20.75% 00:32:57.002 cpu : usr=66.10%, sys=30.76%, ctx=81, majf=0, minf=5 00:32:57.002 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:57.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:57.003 issued rwts: total=14507,14440,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.003 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:32:57.003 00:32:57.003 Run status group 0 (all jobs): 00:32:57.003 READ: bw=28.2MiB/s (29.6MB/s), 28.2MiB/s-28.2MiB/s (29.6MB/s-29.6MB/s), io=56.7MiB (59.4MB), run=2007-2007msec 00:32:57.003 WRITE: bw=28.1MiB/s (29.5MB/s), 28.1MiB/s-28.1MiB/s (29.5MB/s-29.5MB/s), io=56.4MiB (59.1MB), run=2007-2007msec 00:32:57.003 20:24:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:57.003 20:24:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.003 20:24:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.003 20:24:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.003 20:24:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:32:57.003 20:24:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.003 20:24:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.577 20:24:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.577 20:24:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@62 -- # ls_nested_guid=f8b7e1c6-7f69-4f4c-99fd-c36b46dc93f3 00:32:57.577 20:24:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@63 -- # get_lvs_free_mb f8b7e1c6-7f69-4f4c-99fd-c36b46dc93f3 00:32:57.577 20:24:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=f8b7e1c6-7f69-4f4c-99fd-c36b46dc93f3 00:32:57.577 20:24:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:32:57.577 20:24:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:32:57.577 20:24:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:32:57.577 20:24:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # rpc_cmd bdev_lvol_get_lvstores 00:32:57.577 20:24:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.577 20:24:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.577 20:24:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.577 20:24:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:32:57.577 { 00:32:57.577 "uuid": "8a3cadd9-dcff-4019-b0ce-ec320d733123", 00:32:57.577 "name": "lvs_0", 00:32:57.577 "base_bdev": "Nvme0n1", 00:32:57.578 "total_data_clusters": 1787, 00:32:57.578 "free_clusters": 0, 00:32:57.578 "block_size": 512, 00:32:57.578 "cluster_size": 1073741824 00:32:57.578 }, 00:32:57.578 { 00:32:57.578 "uuid": "f8b7e1c6-7f69-4f4c-99fd-c36b46dc93f3", 00:32:57.578 "name": "lvs_n_0", 00:32:57.578 "base_bdev": "13f0c5ed-f28c-4e3b-b3eb-52a2d00c4cc8", 00:32:57.578 "total_data_clusters": 457025, 00:32:57.578 "free_clusters": 457025, 00:32:57.578 "block_size": 512, 00:32:57.578 "cluster_size": 4194304 00:32:57.578 } 00:32:57.578 ]' 00:32:57.578 20:24:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="f8b7e1c6-7f69-4f4c-99fd-c36b46dc93f3") .free_clusters' 00:32:57.578 20:24:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=457025 00:32:57.578 20:24:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="f8b7e1c6-7f69-4f4c-99fd-c36b46dc93f3") .cluster_size' 00:32:57.578 20:24:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=4194304 00:32:57.578 20:24:49 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=1828100 00:32:57.578 20:24:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 1828100 00:32:57.578 1828100 00:32:57.578 20:24:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 00:32:57.578 20:24:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.578 20:24:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.519 97781c49-e52f-4812-8d2f-7e752ee67454 00:32:58.519 20:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.519 20:24:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:32:58.519 20:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.519 20:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.519 20:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.519 20:24:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:32:58.519 20:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.519 20:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.519 20:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.519 20:24:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:32:58.519 20:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.519 20:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.519 20:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.519 20:24:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:58.519 20:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:58.519 20:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:32:58.519 20:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:58.519 20:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:32:58.519 20:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:58.519 20:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:32:58.519 20:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:32:58.519 20:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:32:58.519 20:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:58.519 20:24:50 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1341 -- # grep libasan 00:32:58.519 20:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:32:58.519 20:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:32:58.519 20:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:32:58.519 20:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:32:58.519 20:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:58.519 20:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:32:58.519 20:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:32:58.519 20:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:32:58.519 20:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:32:58.519 20:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:58.519 20:24:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:58.779 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:58.779 fio-3.35 00:32:58.779 Starting 1 thread 00:32:58.779 EAL: No free 2048 kB hugepages reported on node 1 00:33:01.325 00:33:01.325 test: (groupid=0, jobs=1): err= 0: pid=239095: Wed May 15 20:24:53 2024 00:33:01.325 read: IOPS=6382, BW=24.9MiB/s (26.1MB/s)(50.1MiB/2009msec) 00:33:01.325 slat (usec): min=2, max=110, avg= 2.27, stdev= 1.30 00:33:01.325 clat (usec): min=3893, max=18262, avg=11109.13, stdev=897.19 00:33:01.325 lat (usec): min=3910, max=18265, avg=11111.40, stdev=897.11 00:33:01.325 clat percentiles (usec): 00:33:01.325 | 1.00th=[ 9110], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10421], 00:33:01.325 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:33:01.325 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12125], 95.00th=[12518], 00:33:01.325 | 99.00th=[13042], 99.50th=[13304], 99.90th=[15533], 99.95th=[18220], 00:33:01.325 | 99.99th=[18220] 00:33:01.325 bw ( KiB/s): min=24424, max=25968, per=99.95%, avg=25516.00, stdev=733.28, samples=4 00:33:01.325 iops : min= 6106, max= 6492, avg=6379.00, stdev=183.32, samples=4 00:33:01.325 write: IOPS=6381, BW=24.9MiB/s (26.1MB/s)(50.1MiB/2009msec); 0 zone resets 00:33:01.325 slat (nsec): min=2152, max=94310, avg=2367.62, stdev=879.57 00:33:01.325 clat (usec): min=1892, max=16797, avg=8820.86, stdev=798.03 00:33:01.325 lat (usec): min=1900, max=16799, avg=8823.23, stdev=797.99 00:33:01.325 clat percentiles (usec): 00:33:01.325 | 1.00th=[ 7046], 5.00th=[ 7635], 10.00th=[ 7898], 20.00th=[ 8225], 00:33:01.325 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 8979], 00:33:01.325 | 70.00th=[ 9241], 80.00th=[ 9372], 90.00th=[ 9765], 95.00th=[ 9896], 00:33:01.325 | 99.00th=[10552], 99.50th=[10814], 99.90th=[15270], 99.95th=[15533], 00:33:01.325 | 99.99th=[16712] 00:33:01.325 bw ( KiB/s): min=25344, max=25720, per=99.96%, avg=25516.00, stdev=162.19, samples=4 00:33:01.325 iops : min= 6336, max= 6430, avg=6379.00, stdev=40.55, samples=4 00:33:01.325 lat (msec) : 2=0.01%, 
4=0.09%, 10=52.57%, 20=47.33% 00:33:01.325 cpu : usr=66.24%, sys=30.98%, ctx=72, majf=0, minf=5 00:33:01.325 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:01.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:01.325 issued rwts: total=12822,12821,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:01.325 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:01.325 00:33:01.325 Run status group 0 (all jobs): 00:33:01.325 READ: bw=24.9MiB/s (26.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=50.1MiB (52.5MB), run=2009-2009msec 00:33:01.325 WRITE: bw=24.9MiB/s (26.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=50.1MiB (52.5MB), run=2009-2009msec 00:33:01.325 20:24:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:33:01.325 20:24:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.325 20:24:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.325 20:24:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.325 20:24:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # sync 00:33:01.325 20:24:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:33:01.325 20:24:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.325 20:24:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.238 20:24:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.238 20:24:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:33:03.238 20:24:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.238 20:24:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.238 20:24:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.238 20:24:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:33:03.238 20:24:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.238 20:24:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.523 20:24:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.523 20:24:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:33:03.523 20:24:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.523 20:24:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.523 20:24:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.523 20:24:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:33:03.523 20:24:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.523 20:24:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.435 20:24:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.435 20:24:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:33:05.435 20:24:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:33:05.435 20:24:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:33:05.435 
20:24:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:05.435 20:24:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:33:05.435 20:24:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:05.435 20:24:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:33:05.435 20:24:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:05.435 20:24:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:05.435 rmmod nvme_tcp 00:33:05.435 rmmod nvme_fabrics 00:33:05.435 rmmod nvme_keyring 00:33:05.435 20:24:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:05.435 20:24:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:33:05.435 20:24:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:33:05.435 20:24:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 236045 ']' 00:33:05.435 20:24:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 236045 00:33:05.435 20:24:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 236045 ']' 00:33:05.435 20:24:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 236045 00:33:05.435 20:24:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:33:05.435 20:24:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:05.435 20:24:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 236045 00:33:05.435 20:24:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:05.435 20:24:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:05.435 20:24:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 236045' 00:33:05.435 killing process with pid 236045 00:33:05.435 20:24:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 236045 00:33:05.435 [2024-05-15 20:24:57.871094] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:33:05.435 20:24:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 236045 00:33:05.697 20:24:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:05.697 20:24:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:05.697 20:24:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:05.697 20:24:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:05.697 20:24:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:05.697 20:24:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:05.697 20:24:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:05.697 20:24:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:07.609 20:25:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:07.609 00:33:07.609 real 0m29.278s 00:33:07.609 user 2m24.362s 00:33:07.609 sys 0m9.553s 00:33:07.609 20:25:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:07.609 20:25:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.609 
************************************ 00:33:07.609 END TEST nvmf_fio_host 00:33:07.609 ************************************ 00:33:07.870 20:25:00 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:07.870 20:25:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:07.870 20:25:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:07.870 20:25:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:07.870 ************************************ 00:33:07.870 START TEST nvmf_failover 00:33:07.870 ************************************ 00:33:07.870 20:25:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:07.870 * Looking for test storage... 00:33:07.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:07.870 20:25:00 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:07.870 20:25:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:33:07.870 20:25:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:07.870 20:25:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:07.870 20:25:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:07.870 20:25:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:07.870 20:25:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:07.870 20:25:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:07.870 20:25:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:07.870 20:25:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:07.870 20:25:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:07.870 20:25:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:07.870 20:25:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:07.870 20:25:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:07.870 20:25:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:07.870 20:25:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:07.870 20:25:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:07.870 20:25:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:07.870 20:25:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:07.870 20:25:00 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:07.870 20:25:00 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:07.871 20:25:00 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:07.871 20:25:00 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.871 20:25:00 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.871 20:25:00 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.871 20:25:00 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:33:07.871 20:25:00 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.871 20:25:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:33:07.871 20:25:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:07.871 20:25:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:07.871 20:25:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:07.871 20:25:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:07.871 20:25:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:07.871 20:25:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:07.871 20:25:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:07.871 20:25:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:07.871 20:25:00 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:07.871 20:25:00 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:07.871 20:25:00 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:07.871 20:25:00 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:07.871 20:25:00 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:33:07.871 20:25:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:07.871 20:25:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:07.871 20:25:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:07.871 20:25:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:07.871 20:25:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:07.871 20:25:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:07.871 20:25:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:07.871 20:25:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:07.871 20:25:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:07.871 20:25:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:07.871 20:25:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:33:07.871 20:25:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:16.013 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:16.013 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:16.013 20:25:08 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:16.013 Found net devices under 0000:31:00.0: cvl_0_0 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:16.013 Found net devices under 0000:31:00.1: cvl_0_1 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:16.013 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:16.014 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:16.014 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:16.014 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:16.014 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:16.014 20:25:08 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:16.014 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:16.014 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:16.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:16.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms 00:33:16.014 00:33:16.014 --- 10.0.0.2 ping statistics --- 00:33:16.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.014 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms 00:33:16.014 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:16.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:16.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:33:16.014 00:33:16.014 --- 10.0.0.1 ping statistics --- 00:33:16.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.014 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:33:16.014 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:16.014 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:33:16.014 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:16.014 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:16.014 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:16.014 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:16.014 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:16.014 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:16.014 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:16.274 20:25:08 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:33:16.274 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:16.274 20:25:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:16.274 20:25:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:16.274 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=244855 00:33:16.274 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 244855 00:33:16.274 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:16.274 20:25:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 244855 ']' 00:33:16.274 20:25:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:16.274 20:25:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:16.274 20:25:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:16.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
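For readability, the nvmf_tcp_init network bring-up traced above can be condensed into the following shell sketch. It only restates commands already visible in this log; the cvl_0_0/cvl_0_1 interface names, the cvl_0_0_ns_spdk namespace and the 10.0.0.0/24 addresses are the values used by this particular run, not general defaults.

  # Condensed from the nvmf_tcp_init trace above; the target NIC port is isolated in its own netns.
  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address stays on the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # accept NVMe/TCP traffic on the default port
  ping -c 1 10.0.0.2                                                 # host -> target reachability check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> host reachability check
  modprobe nvme-tcp

Both pings answer in well under a millisecond above, so the data path is confirmed before the target application is started.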
00:33:16.274 20:25:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:16.275 20:25:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:16.275 [2024-05-15 20:25:08.585861] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:33:16.275 [2024-05-15 20:25:08.585912] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:16.275 EAL: No free 2048 kB hugepages reported on node 1 00:33:16.275 [2024-05-15 20:25:08.655201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:16.275 [2024-05-15 20:25:08.722099] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:16.275 [2024-05-15 20:25:08.722133] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:16.275 [2024-05-15 20:25:08.722141] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:16.275 [2024-05-15 20:25:08.722151] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:16.275 [2024-05-15 20:25:08.722157] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:16.275 [2024-05-15 20:25:08.722191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:16.275 [2024-05-15 20:25:08.722363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:16.275 [2024-05-15 20:25:08.722546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:16.536 20:25:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:16.536 20:25:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:33:16.536 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:16.536 20:25:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:16.536 20:25:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:16.536 20:25:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:16.536 20:25:08 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:16.536 [2024-05-15 20:25:08.992629] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:16.536 20:25:09 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:16.796 Malloc0 00:33:16.796 20:25:09 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:17.057 20:25:09 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:17.317 20:25:09 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:17.578 [2024-05-15 20:25:09.874078] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:33:17.578 [2024-05-15 20:25:09.874319] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:17.578 20:25:09 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:17.838 [2024-05-15 20:25:10.086889] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:17.838 20:25:10 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:17.838 [2024-05-15 20:25:10.299543] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:17.838 20:25:10 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=245221 00:33:17.838 20:25:10 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:33:17.838 20:25:10 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:17.838 20:25:10 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 245221 /var/tmp/bdevperf.sock 00:33:17.838 20:25:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 245221 ']' 00:33:17.838 20:25:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:17.838 20:25:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:17.838 20:25:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:17.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
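The target-side provisioning performed by host/failover.sh in the trace above reduces to the sketch below. $rpc_py is the variable set at the top of this script (see the rpc_py= assignment earlier); $SPDK abbreviates the full jenkins workspace path and the for loop is a condensation of the three separate add_listener calls, so treat this as a restatement of the trace rather than the script itself.

  $rpc_py nvmf_create_transport -t tcp -o -u 8192                     # TCP transport, options exactly as passed by the test
  $rpc_py bdev_malloc_create 64 512 -b Malloc0                        # 64 MB RAM-backed bdev, 512-byte blocks
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do                                      # three listeners give the initiator three paths
      $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done
  # Initiator side: bdevperf is launched with -z and driven later via bdevperf.py perform_tests.
  $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &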
00:33:17.838 20:25:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:17.838 20:25:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:18.098 20:25:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:18.098 20:25:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:33:18.098 20:25:10 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:18.358 NVMe0n1 00:33:18.358 20:25:10 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:18.929 00:33:18.929 20:25:11 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=245471 00:33:18.929 20:25:11 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:33:18.929 20:25:11 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:19.871 20:25:12 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:19.871 [2024-05-15 20:25:12.348950] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.348993] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.348999] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349004] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349009] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349013] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349023] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349028] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349032] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349037] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349042] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349046] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349050] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349054] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349059] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349063] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349068] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349072] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349076] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349080] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349085] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349089] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349093] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349097] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349102] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349106] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349110] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349114] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349119] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349123] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349128] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349133] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349137] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349141] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the 
state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349147] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349151] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349156] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349160] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349165] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349169] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349173] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349177] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349181] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349186] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349190] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349194] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349199] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349203] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349208] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349212] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349216] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349220] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349225] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349229] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349233] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349237] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349242] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349246] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349250] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349255] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349260] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349265] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349270] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349274] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349279] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349283] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:19.871 [2024-05-15 20:25:12.349288] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c9270 is same with the state(5) to be set 00:33:20.131 20:25:12 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:33:23.445 20:25:15 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:23.445 00:33:23.445 20:25:15 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:23.445 [2024-05-15 20:25:15.791180] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca7d0 is same with the state(5) to be set 00:33:23.445 [2024-05-15 20:25:15.791228] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca7d0 is same with the state(5) to be set 00:33:23.445 [2024-05-15 20:25:15.791236] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca7d0 is same with the state(5) to be set 00:33:23.445 [2024-05-15 20:25:15.791243] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca7d0 is same with the state(5) to be set 00:33:23.445 [2024-05-15 20:25:15.791249] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca7d0 is same with the state(5) to be set 00:33:23.445 [2024-05-15 20:25:15.791256] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca7d0 is same with the state(5) to be set 00:33:23.445 [2024-05-15 20:25:15.791262] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x21ca7d0 is same with the state(5) to be set 00:33:23.445 [2024-05-15 20:25:15.791269] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca7d0 is same with the state(5) to be set 00:33:23.445 [2024-05-15 20:25:15.791275] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca7d0 is same with the state(5) to be set 00:33:23.445 [2024-05-15 20:25:15.791281] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca7d0 is same with the state(5) to be set 00:33:23.445 [2024-05-15 20:25:15.791287] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca7d0 is same with the state(5) to be set 00:33:23.445 [2024-05-15 20:25:15.791293] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca7d0 is same with the state(5) to be set 00:33:23.445 [2024-05-15 20:25:15.791300] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca7d0 is same with the state(5) to be set 00:33:23.445 [2024-05-15 20:25:15.791306] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca7d0 is same with the state(5) to be set 00:33:23.445 [2024-05-15 20:25:15.791317] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca7d0 is same with the state(5) to be set 00:33:23.445 [2024-05-15 20:25:15.791324] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca7d0 is same with the state(5) to be set 00:33:23.446 [2024-05-15 20:25:15.791330] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca7d0 is same with the state(5) to be set 00:33:23.446 [2024-05-15 20:25:15.791342] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca7d0 is same with the state(5) to be set 00:33:23.446 [2024-05-15 20:25:15.791349] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca7d0 is same with the state(5) to be set 00:33:23.446 [2024-05-15 20:25:15.791355] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca7d0 is same with the state(5) to be set 00:33:23.446 [2024-05-15 20:25:15.791362] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca7d0 is same with the state(5) to be set 00:33:23.446 [2024-05-15 20:25:15.791368] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca7d0 is same with the state(5) to be set 00:33:23.446 [2024-05-15 20:25:15.791374] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca7d0 is same with the state(5) to be set 00:33:23.446 [2024-05-15 20:25:15.791381] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca7d0 is same with the state(5) to be set 00:33:23.446 [2024-05-15 20:25:15.791387] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca7d0 is same with the state(5) to be set 00:33:23.446 [2024-05-15 20:25:15.791393] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca7d0 is same with the state(5) to be set 00:33:23.446 [2024-05-15 20:25:15.791400] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca7d0 is same with the state(5) to be set 00:33:23.446 [2024-05-15 20:25:15.791406] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca7d0 is same with the state(5) to be set 00:33:23.446 [2024-05-15 20:25:15.791413] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca7d0 is same with the state(5) to be set 00:33:23.446 [2024-05-15 20:25:15.791421] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ca7d0 is same with the state(5) to be set 00:33:23.446 20:25:15 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:33:26.746 20:25:18 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:26.746 [2024-05-15 20:25:18.968935] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:26.746 20:25:18 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:33:27.686 20:25:20 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:27.946 [2024-05-15 20:25:20.200332] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21caeb0 is same with the state(5) to be set 00:33:27.946 [2024-05-15 20:25:20.200374] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21caeb0 is same with the state(5) to be set 00:33:27.946 [2024-05-15 20:25:20.200382] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21caeb0 is same with the state(5) to be set 00:33:27.946 20:25:20 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 245471 00:33:34.543 0 00:33:34.543 20:25:26 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 245221 00:33:34.543 20:25:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 245221 ']' 00:33:34.543 20:25:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 245221 00:33:34.543 20:25:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:33:34.543 20:25:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:34.543 20:25:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 245221 00:33:34.543 20:25:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:34.543 20:25:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:34.544 20:25:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 245221' 00:33:34.544 killing process with pid 245221 00:33:34.544 20:25:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 245221 00:33:34.544 20:25:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 245221 00:33:34.544 20:25:26 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:34.544 [2024-05-15 20:25:10.375115] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
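The failover exercise itself, spread across the xtrace lines above, comes down to the sequence below; $rpc_py and $bdevperf_rpc_sock are the variables set at the top of the script, and the bdevperf.py path is abbreviated. The long run of "ABORTED - SQ DELETION" completions in the try.txt dump that follows lines up with the first remove_listener at 20:25:12: the in-flight reads on the 4420 path are aborted when that listener goes away, bdevperf carries on over the remaining path, and the run still finishes with status 0.

  $rpc_py -s $bdevperf_rpc_sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc_py -s $bdevperf_rpc_sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  bdevperf.py -s $bdevperf_rpc_sock perform_tests &            # 15 s verify workload against NVMe0n1
  run_test_pid=$!
  $rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # drop the first path
  sleep 3
  $rpc_py -s $bdevperf_rpc_sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # drop the second path
  sleep 3
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420      # bring 4420 back
  sleep 1
  $rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  wait $run_test_pid                                            # prints 0 above: I/O survived every path change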
00:33:34.544 [2024-05-15 20:25:10.375172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid245221 ] 00:33:34.544 EAL: No free 2048 kB hugepages reported on node 1 00:33:34.544 [2024-05-15 20:25:10.442006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:34.544 [2024-05-15 20:25:10.506461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:34.544 Running I/O for 15 seconds... 00:33:34.544 [2024-05-15 20:25:12.349893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.544 [2024-05-15 20:25:12.349929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.544 [2024-05-15 20:25:12.349945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:99656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.544 [2024-05-15 20:25:12.349953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.544 [2024-05-15 20:25:12.349963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.544 [2024-05-15 20:25:12.349970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.544 [2024-05-15 20:25:12.349980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:99672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.544 [2024-05-15 20:25:12.349987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.544 [2024-05-15 20:25:12.349996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.544 [2024-05-15 20:25:12.350003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.544 [2024-05-15 20:25:12.350012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.544 [2024-05-15 20:25:12.350019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.544 [2024-05-15 20:25:12.350029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.544 [2024-05-15 20:25:12.350036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.544 [2024-05-15 20:25:12.350045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.544 [2024-05-15 20:25:12.350052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.544 [2024-05-15 20:25:12.350061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:99712 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:33:34.544 [2024-05-15 20:25:12.350068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.544 [2024-05-15 20:25:12.350076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.544 [2024-05-15 20:25:12.350083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.544 [2024-05-15 20:25:12.350092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.544 [2024-05-15 20:25:12.350100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.544 [2024-05-15 20:25:12.350114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.544 [2024-05-15 20:25:12.350121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.544 [2024-05-15 20:25:12.350130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.544 [2024-05-15 20:25:12.350137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.544 [2024-05-15 20:25:12.350146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.544 [2024-05-15 20:25:12.350154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.544 [2024-05-15 20:25:12.350162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.544 [2024-05-15 20:25:12.350169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.544 [2024-05-15 20:25:12.350178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.544 [2024-05-15 20:25:12.350185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.544 [2024-05-15 20:25:12.350194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.544 [2024-05-15 20:25:12.350201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.544 [2024-05-15 20:25:12.350210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.544 [2024-05-15 20:25:12.350217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.544 [2024-05-15 20:25:12.350226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.544 
[2024-05-15 20:25:12.350233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.544 [2024-05-15 20:25:12.350242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.544 [2024-05-15 20:25:12.350249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.544 [2024-05-15 20:25:12.350258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.544 [2024-05-15 20:25:12.350265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.544 [2024-05-15 20:25:12.350274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.544 [2024-05-15 20:25:12.350281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.544 [2024-05-15 20:25:12.350290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.544 [2024-05-15 20:25:12.350297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.544 [2024-05-15 20:25:12.350306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:99832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.544 [2024-05-15 20:25:12.350321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.544 [2024-05-15 20:25:12.350331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.544 [2024-05-15 20:25:12.350338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.544 [2024-05-15 20:25:12.350347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.544 [2024-05-15 20:25:12.350354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.544 [2024-05-15 20:25:12.350362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.544 [2024-05-15 20:25:12.350370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.544 [2024-05-15 20:25:12.350379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.544 [2024-05-15 20:25:12.350386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.544 [2024-05-15 20:25:12.350395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:99872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.544 [2024-05-15 20:25:12.350402] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.544 [2024-05-15 20:25:12.350411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.544 [2024-05-15 20:25:12.350417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.544 [2024-05-15 20:25:12.350426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.544 [2024-05-15 20:25:12.350434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.544 [2024-05-15 20:25:12.350442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:99896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.544 [2024-05-15 20:25:12.350450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.544 [2024-05-15 20:25:12.350458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.544 [2024-05-15 20:25:12.350466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.544 [2024-05-15 20:25:12.350475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.544 [2024-05-15 20:25:12.350482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.544 [2024-05-15 20:25:12.350492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.544 [2024-05-15 20:25:12.350499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.544 [2024-05-15 20:25:12.350508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.544 [2024-05-15 20:25:12.350515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.544 [2024-05-15 20:25:12.350525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.545 [2024-05-15 20:25:12.350534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.545 [2024-05-15 20:25:12.350544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.545 [2024-05-15 20:25:12.350551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.545 [2024-05-15 20:25:12.350560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.545 [2024-05-15 20:25:12.350566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.545 [2024-05-15 20:25:12.350575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.545 [2024-05-15 20:25:12.350582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.545 [2024-05-15 20:25:12.350591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.545 [2024-05-15 20:25:12.350598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.545 [2024-05-15 20:25:12.350607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.545 [2024-05-15 20:25:12.350614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.545 [2024-05-15 20:25:12.350623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.545 [2024-05-15 20:25:12.350630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.545 [2024-05-15 20:25:12.350639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.545 [2024-05-15 20:25:12.350646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.545 [2024-05-15 20:25:12.350655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.545 [2024-05-15 20:25:12.350662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.545 [2024-05-15 20:25:12.350670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.545 [2024-05-15 20:25:12.350677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.545 [2024-05-15 20:25:12.350686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.545 [2024-05-15 20:25:12.350693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.545 [2024-05-15 20:25:12.350702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.545 [2024-05-15 20:25:12.350709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.545 [2024-05-15 20:25:12.350718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:100032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.545 [2024-05-15 20:25:12.350726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:34.545 [2024-05-15 20:25:12.350735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:34.545 [2024-05-15 20:25:12.350742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical nvme_io_qpair_print_command / spdk_nvme_print_completion pairs repeat for READ lba:100048 through lba:100528 and WRITE lba:100536 through lba:100656 (sqid:1, len:8), each queued I/O aborted with SQ DELETION (00/08) ...]
00:33:34.547 [2024-05-15 20:25:12.352002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:33:34.547 [2024-05-15 20:25:12.352008] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:34.547 [2024-05-15 20:25:12.352014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100664 len:8 PRP1 0x0 PRP2 0x0
00:33:34.547 [2024-05-15 20:25:12.352022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:34.547 [2024-05-15 20:25:12.352059] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13e3d80 was disconnected and freed. reset controller.
00:33:34.547 [2024-05-15 20:25:12.352075] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:33:34.547 [2024-05-15 20:25:12.352094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:33:34.547 [2024-05-15 20:25:12.352101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:34.547 [2024-05-15 20:25:12.352110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:33:34.547 [2024-05-15 20:25:12.352117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:34.547 [2024-05-15 20:25:12.352125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:33:34.547 [2024-05-15 20:25:12.352133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:34.547 [2024-05-15 20:25:12.352140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:33:34.547 [2024-05-15 20:25:12.352147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:34.547 [2024-05-15 20:25:12.352154] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:34.547 [2024-05-15 20:25:12.355801] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:34.547 [2024-05-15 20:25:12.355825] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c5910 (9): Bad file descriptor
00:33:34.547 [2024-05-15 20:25:12.478696] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
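The abort/failover/reset sequence above is the behavior this test exercises: when the active TCP path is torn down, every I/O still queued on the deleted submission queue completes with ABORTED - SQ DELETION, bdev_nvme fails over to the alternate trid (10.0.0.2:4421), and the controller is reconnected. A minimal sketch of how a two-path setup of this kind can be registered through SPDK's scripts/rpc.py is shown below; the bdev name, address, and port numbers are illustrative assumptions rather than values taken from this job's scripts, and the exact flags vary between SPDK versions.

  # target side: expose the subsystem on two TCP listeners (addresses/ports are hypothetical)
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # initiator side: attach the same controller name once per path so bdev_nvme has a failover trid
  scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # removing the active listener forces a failover like the one recorded above
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420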
00:33:34.547 [2024-05-15 20:25:15.793210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:116192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:34.547 [2024-05-15 20:25:15.793250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical nvme_io_qpair_print_command / spdk_nvme_print_completion pairs repeat for READ lba:116200 through lba:116440 and WRITE lba:116448 through lba:116840 (sqid:1, len:8), each queued I/O aborted with SQ DELETION (00/08) ...]
00:33:34.549 [2024-05-15 20:25:15.794666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:34.549 [2024-05-15 20:25:15.794674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116848 len:8 PRP1 0x0 PRP2 0x0
00:33:34.549 [2024-05-15 20:25:15.794681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... nvme_qpair_abort_queued_reqs / nvme_qpair_manual_complete_request blocks repeat for WRITE lba:116856 through lba:116976 (sqid:1 cid:0, PRP1 0x0 PRP2 0x0), each aborted with SQ DELETION (00/08) ...]
00:33:34.550 [2024-05-15 20:25:15.795117] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:33:34.550 [2024-05-15 20:25:15.795123] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:34.550 [2024-05-15 20:25:15.795129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116984 len:8 PRP1 0x0 PRP2
0x0 00:33:34.550 [2024-05-15 20:25:15.795136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.550 [2024-05-15 20:25:15.795144] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:34.550 [2024-05-15 20:25:15.795149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:34.550 [2024-05-15 20:25:15.795155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116992 len:8 PRP1 0x0 PRP2 0x0 00:33:34.550 [2024-05-15 20:25:15.795162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.550 [2024-05-15 20:25:15.795170] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:34.550 [2024-05-15 20:25:15.795175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:34.550 [2024-05-15 20:25:15.795181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117000 len:8 PRP1 0x0 PRP2 0x0 00:33:34.550 [2024-05-15 20:25:15.795189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.550 [2024-05-15 20:25:15.795196] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:34.550 [2024-05-15 20:25:15.795203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:34.550 [2024-05-15 20:25:15.795209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117008 len:8 PRP1 0x0 PRP2 0x0 00:33:34.550 [2024-05-15 20:25:15.795216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.550 [2024-05-15 20:25:15.795224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:34.550 [2024-05-15 20:25:15.795230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:34.550 [2024-05-15 20:25:15.795235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117016 len:8 PRP1 0x0 PRP2 0x0 00:33:34.550 [2024-05-15 20:25:15.795242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.550 [2024-05-15 20:25:15.795250] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:34.550 [2024-05-15 20:25:15.795255] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:34.550 [2024-05-15 20:25:15.795261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117024 len:8 PRP1 0x0 PRP2 0x0 00:33:34.550 [2024-05-15 20:25:15.795268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.550 [2024-05-15 20:25:15.795276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:34.550 [2024-05-15 20:25:15.795281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:34.550 [2024-05-15 20:25:15.795287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117032 len:8 PRP1 0x0 PRP2 0x0 00:33:34.550 [2024-05-15 20:25:15.795294] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.550 [2024-05-15 20:25:15.795301] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:34.550 [2024-05-15 20:25:15.795306] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:34.550 [2024-05-15 20:25:15.795312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117040 len:8 PRP1 0x0 PRP2 0x0 00:33:34.550 [2024-05-15 20:25:15.795323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.550 [2024-05-15 20:25:15.795330] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:34.550 [2024-05-15 20:25:15.795336] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:34.550 [2024-05-15 20:25:15.795342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117048 len:8 PRP1 0x0 PRP2 0x0 00:33:34.550 [2024-05-15 20:25:15.795349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.550 [2024-05-15 20:25:15.795356] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:34.550 [2024-05-15 20:25:15.795362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:34.550 [2024-05-15 20:25:15.795368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117056 len:8 PRP1 0x0 PRP2 0x0 00:33:34.550 [2024-05-15 20:25:15.795374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.550 [2024-05-15 20:25:15.795382] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:34.550 [2024-05-15 20:25:15.795387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:34.550 [2024-05-15 20:25:15.795393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117064 len:8 PRP1 0x0 PRP2 0x0 00:33:34.550 [2024-05-15 20:25:15.795400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.550 [2024-05-15 20:25:15.795415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:34.550 [2024-05-15 20:25:15.795421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:34.550 [2024-05-15 20:25:15.795427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117072 len:8 PRP1 0x0 PRP2 0x0 00:33:34.550 [2024-05-15 20:25:15.795433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.550 [2024-05-15 20:25:15.795441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:34.550 [2024-05-15 20:25:15.795446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:34.550 [2024-05-15 20:25:15.795452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117080 len:8 PRP1 0x0 PRP2 0x0 00:33:34.550 [2024-05-15 20:25:15.795459] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.550 [2024-05-15 20:25:15.795467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:34.550 [2024-05-15 20:25:15.795472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:34.550 [2024-05-15 20:25:15.795479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117088 len:8 PRP1 0x0 PRP2 0x0 00:33:34.550 [2024-05-15 20:25:15.795486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.550 [2024-05-15 20:25:15.795494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:34.550 [2024-05-15 20:25:15.795499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:34.550 [2024-05-15 20:25:15.795505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117096 len:8 PRP1 0x0 PRP2 0x0 00:33:34.550 [2024-05-15 20:25:15.795512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.550 [2024-05-15 20:25:15.795520] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:34.551 [2024-05-15 20:25:15.795525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:34.551 [2024-05-15 20:25:15.795531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117104 len:8 PRP1 0x0 PRP2 0x0 00:33:34.551 [2024-05-15 20:25:15.795538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.551 [2024-05-15 20:25:15.795547] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:34.551 [2024-05-15 20:25:15.795552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:34.551 [2024-05-15 20:25:15.795558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117112 len:8 PRP1 0x0 PRP2 0x0 00:33:34.551 [2024-05-15 20:25:15.795565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.551 [2024-05-15 20:25:15.795573] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:34.551 [2024-05-15 20:25:15.795579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:34.551 [2024-05-15 20:25:15.795585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117120 len:8 PRP1 0x0 PRP2 0x0 00:33:34.551 [2024-05-15 20:25:15.795592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.551 [2024-05-15 20:25:15.795600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:34.551 [2024-05-15 20:25:15.795605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:34.551 [2024-05-15 20:25:15.795611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117128 len:8 PRP1 0x0 PRP2 0x0 00:33:34.551 [2024-05-15 20:25:15.795620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.551 [2024-05-15 20:25:15.795628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:34.551 [2024-05-15 20:25:15.795633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:34.551 [2024-05-15 20:25:15.795639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117136 len:8 PRP1 0x0 PRP2 0x0 00:33:34.551 [2024-05-15 20:25:15.795647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.551 [2024-05-15 20:25:15.795654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:34.551 [2024-05-15 20:25:15.795660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:34.551 [2024-05-15 20:25:15.795666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117144 len:8 PRP1 0x0 PRP2 0x0 00:33:34.551 [2024-05-15 20:25:15.795673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.551 [2024-05-15 20:25:15.795680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:34.551 [2024-05-15 20:25:15.795686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:34.551 [2024-05-15 20:25:15.795692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117152 len:8 PRP1 0x0 PRP2 0x0 00:33:34.551 [2024-05-15 20:25:15.795699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.551 [2024-05-15 20:25:15.795707] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:34.551 [2024-05-15 20:25:15.795712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:34.551 [2024-05-15 20:25:15.795718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117160 len:8 PRP1 0x0 PRP2 0x0 00:33:34.551 [2024-05-15 20:25:15.795726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.551 [2024-05-15 20:25:15.795733] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:34.551 [2024-05-15 20:25:15.795739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:34.551 [2024-05-15 20:25:15.795745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117168 len:8 PRP1 0x0 PRP2 0x0 00:33:34.551 [2024-05-15 20:25:15.795753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.551 [2024-05-15 20:25:15.795760] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:34.551 [2024-05-15 20:25:15.795766] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:34.551 [2024-05-15 20:25:15.795772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117176 len:8 PRP1 0x0 PRP2 0x0 00:33:34.551 [2024-05-15 20:25:15.795778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.551 
[2024-05-15 20:25:15.795786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:34.551 [2024-05-15 20:25:15.795792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:34.551 [2024-05-15 20:25:15.795797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117184 len:8 PRP1 0x0 PRP2 0x0 00:33:34.551 [2024-05-15 20:25:15.795805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.551 [2024-05-15 20:25:15.795812] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:34.551 [2024-05-15 20:25:15.795818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:34.551 [2024-05-15 20:25:15.795828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117192 len:8 PRP1 0x0 PRP2 0x0 00:33:34.551 [2024-05-15 20:25:15.795835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.551 [2024-05-15 20:25:15.795844] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:34.551 [2024-05-15 20:25:15.795849] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:34.551 [2024-05-15 20:25:15.795856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117200 len:8 PRP1 0x0 PRP2 0x0 00:33:34.551 [2024-05-15 20:25:15.795863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.551 [2024-05-15 20:25:15.795871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:34.551 [2024-05-15 20:25:15.795877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:34.551 [2024-05-15 20:25:15.795883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117208 len:8 PRP1 0x0 PRP2 0x0 00:33:34.551 [2024-05-15 20:25:15.795890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.551 [2024-05-15 20:25:15.795925] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13bfdd0 was disconnected and freed. reset controller. 
00:33:34.551 [2024-05-15 20:25:15.795935] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:33:34.551 [2024-05-15 20:25:15.795955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:33:34.551 [2024-05-15 20:25:15.795964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:34.551 [2024-05-15 20:25:15.795972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:33:34.551 [2024-05-15 20:25:15.795980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:34.551 [2024-05-15 20:25:15.795988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:33:34.551 [2024-05-15 20:25:15.795995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:34.551 [2024-05-15 20:25:15.796003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:33:34.551 [2024-05-15 20:25:15.796010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:34.551 [2024-05-15 20:25:15.796018] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:34.551 [2024-05-15 20:25:15.796041] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c5910 (9): Bad file descriptor
00:33:34.551 [2024-05-15 20:25:15.799643] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:34.551 [2024-05-15 20:25:15.969870] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:33:34.551 [2024-05-15 20:25:20.201755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:81504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.551 [2024-05-15 20:25:20.201795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.551 [2024-05-15 20:25:20.201813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:81512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.551 [2024-05-15 20:25:20.201822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.551 [2024-05-15 20:25:20.201839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:81520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.551 [2024-05-15 20:25:20.201847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.551 [2024-05-15 20:25:20.201859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:81528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.551 [2024-05-15 20:25:20.201867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.551 [2024-05-15 20:25:20.201877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:81536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.551 [2024-05-15 20:25:20.201884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.551 [2024-05-15 20:25:20.201893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:81544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.551 [2024-05-15 20:25:20.201900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.551 [2024-05-15 20:25:20.201909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:81552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.551 [2024-05-15 20:25:20.201917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.551 [2024-05-15 20:25:20.201928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:81560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.551 [2024-05-15 20:25:20.201936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.551 [2024-05-15 20:25:20.201946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:81568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.551 [2024-05-15 20:25:20.201953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.551 [2024-05-15 20:25:20.201965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:81576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.551 [2024-05-15 20:25:20.201974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.551 [2024-05-15 20:25:20.201983] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:81584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.551 [2024-05-15 20:25:20.201992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.552 [2024-05-15 20:25:20.202002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:81592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.552 [2024-05-15 20:25:20.202010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.552 [2024-05-15 20:25:20.202019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:81600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.552 [2024-05-15 20:25:20.202026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.552 [2024-05-15 20:25:20.202035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.552 [2024-05-15 20:25:20.202042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.552 [2024-05-15 20:25:20.202052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:81616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.552 [2024-05-15 20:25:20.202060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.552 [2024-05-15 20:25:20.202070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:81624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.552 [2024-05-15 20:25:20.202077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.552 [2024-05-15 20:25:20.202086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:81632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.552 [2024-05-15 20:25:20.202093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.552 [2024-05-15 20:25:20.202102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:81640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.552 [2024-05-15 20:25:20.202108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.552 [2024-05-15 20:25:20.202117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:81648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.552 [2024-05-15 20:25:20.202124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.552 [2024-05-15 20:25:20.202132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:81656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.552 [2024-05-15 20:25:20.202140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.552 [2024-05-15 20:25:20.202149] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:81664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.552 [2024-05-15 20:25:20.202155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.552 [2024-05-15 20:25:20.202165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:81672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.552 [2024-05-15 20:25:20.202172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.552 [2024-05-15 20:25:20.202181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:81680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.552 [2024-05-15 20:25:20.202188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.552 [2024-05-15 20:25:20.202197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.552 [2024-05-15 20:25:20.202204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.552 [2024-05-15 20:25:20.202213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:81696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.552 [2024-05-15 20:25:20.202219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.552 [2024-05-15 20:25:20.202228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:81704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.552 [2024-05-15 20:25:20.202236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.552 [2024-05-15 20:25:20.202245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.552 [2024-05-15 20:25:20.202253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.552 [2024-05-15 20:25:20.202263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:81720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.552 [2024-05-15 20:25:20.202270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.552 [2024-05-15 20:25:20.202279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.552 [2024-05-15 20:25:20.202286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.552 [2024-05-15 20:25:20.202295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:81736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.552 [2024-05-15 20:25:20.202302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.552 [2024-05-15 20:25:20.202311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:81744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:34.552 [2024-05-15 20:25:20.202324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.552 [2024-05-15 20:25:20.202334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:81784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.552 [2024-05-15 20:25:20.202341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.552 [2024-05-15 20:25:20.202349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.552 [2024-05-15 20:25:20.202357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.552 [2024-05-15 20:25:20.202366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:81800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.552 [2024-05-15 20:25:20.202373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.552 [2024-05-15 20:25:20.202382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.552 [2024-05-15 20:25:20.202389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.552 [2024-05-15 20:25:20.202399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:81816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.552 [2024-05-15 20:25:20.202406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.552 [2024-05-15 20:25:20.202416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.552 [2024-05-15 20:25:20.202424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.552 [2024-05-15 20:25:20.202432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.552 [2024-05-15 20:25:20.202440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.552 [2024-05-15 20:25:20.202448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.552 [2024-05-15 20:25:20.202455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.552 [2024-05-15 20:25:20.202464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.552 [2024-05-15 20:25:20.202473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.552 [2024-05-15 20:25:20.202482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81856 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:33:34.552 [2024-05-15 20:25:20.202489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.552 [2024-05-15 20:25:20.202498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.552 [2024-05-15 20:25:20.202505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.552 [2024-05-15 20:25:20.202514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.552 [2024-05-15 20:25:20.202522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.552 [2024-05-15 20:25:20.202531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.552 [2024-05-15 20:25:20.202538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.552 [2024-05-15 20:25:20.202547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.552 [2024-05-15 20:25:20.202554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.552 [2024-05-15 20:25:20.202563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.552 [2024-05-15 20:25:20.202569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.553 [2024-05-15 20:25:20.202579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:81904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.553 [2024-05-15 20:25:20.202587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.553 [2024-05-15 20:25:20.202596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.553 [2024-05-15 20:25:20.202603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.553 [2024-05-15 20:25:20.202611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.553 [2024-05-15 20:25:20.202618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.553 [2024-05-15 20:25:20.202627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.553 [2024-05-15 20:25:20.202633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.553 [2024-05-15 20:25:20.202643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:81936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.553 [2024-05-15 
20:25:20.202651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.553 [2024-05-15 20:25:20.202661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:81944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.553 [2024-05-15 20:25:20.202668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.553 [2024-05-15 20:25:20.202677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.553 [2024-05-15 20:25:20.202686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.553 [2024-05-15 20:25:20.202695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.553 [2024-05-15 20:25:20.202702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.553 [2024-05-15 20:25:20.202711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.553 [2024-05-15 20:25:20.202718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.553 [2024-05-15 20:25:20.202726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:81976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.553 [2024-05-15 20:25:20.202733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.553 [2024-05-15 20:25:20.202742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.553 [2024-05-15 20:25:20.202749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.553 [2024-05-15 20:25:20.202758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:81992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.553 [2024-05-15 20:25:20.202765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.553 [2024-05-15 20:25:20.202773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.553 [2024-05-15 20:25:20.202780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.553 [2024-05-15 20:25:20.202789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:82008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.553 [2024-05-15 20:25:20.202798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.553 [2024-05-15 20:25:20.202807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.553 [2024-05-15 20:25:20.202814] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.553 [2024-05-15 20:25:20.202823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.553 [2024-05-15 20:25:20.202829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.553 [2024-05-15 20:25:20.202838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.553 [2024-05-15 20:25:20.202845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.553 [2024-05-15 20:25:20.202854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.553 [2024-05-15 20:25:20.202861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.553 [2024-05-15 20:25:20.202870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.553 [2024-05-15 20:25:20.202877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.553 [2024-05-15 20:25:20.202887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:82056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.553 [2024-05-15 20:25:20.202894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.553 [2024-05-15 20:25:20.202903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:82064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.553 [2024-05-15 20:25:20.202910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.553 [2024-05-15 20:25:20.202920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:82072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.553 [2024-05-15 20:25:20.202927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.553 [2024-05-15 20:25:20.202936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.553 [2024-05-15 20:25:20.202943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.553 [2024-05-15 20:25:20.202952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.553 [2024-05-15 20:25:20.202959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.553 [2024-05-15 20:25:20.202968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:82096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.553 [2024-05-15 20:25:20.202975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.553 [2024-05-15 20:25:20.202984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:82104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.553 [2024-05-15 20:25:20.202991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.553 [2024-05-15 20:25:20.203000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:82112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.553 [2024-05-15 20:25:20.203006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.553 [2024-05-15 20:25:20.203016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.553 [2024-05-15 20:25:20.203023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.553 [2024-05-15 20:25:20.203032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.553 [2024-05-15 20:25:20.203039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.553 [2024-05-15 20:25:20.203048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.553 [2024-05-15 20:25:20.203054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.553 [2024-05-15 20:25:20.203063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.553 [2024-05-15 20:25:20.203071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.553 [2024-05-15 20:25:20.203079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.553 [2024-05-15 20:25:20.203088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.553 [2024-05-15 20:25:20.203097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:82160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:34.553 [2024-05-15 20:25:20.203104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.553 [2024-05-15 20:25:20.203128] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:34.553 [2024-05-15 20:25:20.203136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82168 len:8 PRP1 0x0 PRP2 0x0 00:33:34.553 [2024-05-15 20:25:20.203143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.553 [2024-05-15 20:25:20.203154] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:34.553 [2024-05-15 20:25:20.203160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:33:34.553 [2024-05-15 20:25:20.203166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82176 len:8 PRP1 0x0 PRP2 0x0
00:33:34.553 [2024-05-15 20:25:20.203173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same sequence (aborting queued i/o, Command completed manually, WRITE print, ABORTED - SQ DELETION completion) repeats for every queued WRITE from lba:82184 through lba:82520 (len:8 each) and then for the queued READs lba:81752 through lba:81776, timestamps 20:25:20.203181 through 20:25:20.214924 ...]
00:33:34.555 [2024-05-15 20:25:20.214964] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13bfd80 was disconnected and freed. reset controller.
00:33:34.555 [2024-05-15 20:25:20.214974] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:33:34.555 [2024-05-15 20:25:20.215000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:33:34.555 [2024-05-15 20:25:20.215009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:34.555 [2024-05-15 20:25:20.215019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:33:34.555 [2024-05-15 20:25:20.215026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:34.555 [2024-05-15 20:25:20.215034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:33:34.555 [2024-05-15 20:25:20.215041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:34.555 [2024-05-15 20:25:20.215049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:33:34.555 [2024-05-15 20:25:20.215056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:34.555 [2024-05-15 20:25:20.215063] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:34.555 [2024-05-15 20:25:20.215101] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c5910 (9): Bad file descriptor
00:33:34.555 [2024-05-15 20:25:20.218708] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:34.555 [2024-05-15 20:25:20.257531] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
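The entries above show the bdev_nvme failover path as this trace exercises it: the old qpair is freed, the trid switches from 10.0.0.2:4422 to 10.0.0.2:4420, queued admin requests are aborted with SQ DELETION, and the controller is reset. A minimal sketch of how that path is driven, condensed from the rpc.py calls that appear verbatim elsewhere in this log (the workspace path, the 10.0.0.2 address and the 4420-4422 ports are specific to this job):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # expose the subsystem on two extra ports so bdev_nvme has alternate trids to fail over to
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # attach the same controller through each path on the bdevperf RPC socket
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # detaching the currently active path is what produces the "Start failover ... Resetting controller successful" sequence above
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

In the run captured here the script appears to do this once per port, which is why the check that follows the results table expects three successful resets.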
00:33:34.555
00:33:34.555 Latency(us)
00:33:34.556 Device Information : runtime(s)    IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:33:34.556 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:34.556 Verification LBA range: start 0x0 length 0x4000
00:33:34.556 NVMe0n1            : 15.01       9004.20     35.17    807.60     0.00   13017.21     761.17   23156.05
00:33:34.556 ===================================================================================================================
00:33:34.556 Total              :             9004.20     35.17    807.60     0.00   13017.21     761.17   23156.05
00:33:34.556 Received shutdown signal, test time was about 15.000000 seconds
00:33:34.556
00:33:34.556 Latency(us)
00:33:34.556 Device Information : runtime(s)    IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:33:34.556 ===================================================================================================================
00:33:34.556 Total              :                0.00      0.00      0.00     0.00       0.00       0.00       0.00
00:33:34.556 20:25:26 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:33:34.556 20:25:26 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:33:34.556 20:25:26 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:33:34.556 20:25:26 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=248346
00:33:34.556 20:25:26 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 248346 /var/tmp/bdevperf.sock
00:33:34.556 20:25:26 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:33:34.556 20:25:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 248346 ']'
00:33:34.556 20:25:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:33:34.556 20:25:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100
00:33:34.556 20:25:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:33:34.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
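The grep/count lines above are the pass criterion for the long run that just finished: host/failover.sh@65-67 counts 'Resetting controller successful' messages in the captured output and requires exactly three, presumably one per forced path failover. A rough sketch of that check, assuming (as this job's trace suggests) the run's output was saved to try.txt:

    # sketch of the verification step traced above; try.txt is the file this job cats and later removes
    log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    count=$(grep -c 'Resetting controller successful' "$log")
    # anything other than three successful controller resets fails the test
    (( count != 3 )) && exit 1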
00:33:34.556 20:25:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:34.556 20:25:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:35.137 20:25:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:35.137 20:25:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:33:35.137 20:25:27 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:35.137 [2024-05-15 20:25:27.526352] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:35.137 20:25:27 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:35.398 [2024-05-15 20:25:27.698804] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:35.398 20:25:27 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:35.658 NVMe0n1 00:33:35.919 20:25:28 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:36.208 00:33:36.208 20:25:28 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:36.506 00:33:36.507 20:25:28 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:36.507 20:25:28 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:33:36.799 20:25:29 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:36.799 20:25:29 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:33:40.114 20:25:32 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:40.114 20:25:32 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:33:40.114 20:25:32 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:40.114 20:25:32 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=249518 00:33:40.114 20:25:32 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 249518 00:33:41.054 0 00:33:41.054 20:25:33 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:41.054 [2024-05-15 20:25:26.552682] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:33:41.054 [2024-05-15 20:25:26.552739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid248346 ] 00:33:41.054 EAL: No free 2048 kB hugepages reported on node 1 00:33:41.054 [2024-05-15 20:25:26.635762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:41.054 [2024-05-15 20:25:26.699163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:41.054 [2024-05-15 20:25:29.235353] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:41.054 [2024-05-15 20:25:29.235398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:41.054 [2024-05-15 20:25:29.235409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.054 [2024-05-15 20:25:29.235419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:41.054 [2024-05-15 20:25:29.235426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.055 [2024-05-15 20:25:29.235434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:41.055 [2024-05-15 20:25:29.235441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.055 [2024-05-15 20:25:29.235448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:41.055 [2024-05-15 20:25:29.235455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:41.055 [2024-05-15 20:25:29.235462] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:41.055 [2024-05-15 20:25:29.235486] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:41.055 [2024-05-15 20:25:29.235500] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1807910 (9): Bad file descriptor 00:33:41.055 [2024-05-15 20:25:29.246624] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:33:41.055 Running I/O for 1 seconds... 
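The 1-second verify run whose log begins above is driven by the bdevperf instance started at host/failover.sh@72 and kicked with perform_tests at @89, both visible earlier in this trace. A condensed sketch using the same flags and the workspace path of this job:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # start bdevperf idle (-z) on its own RPC socket: queue depth 128, 4096-byte verify I/O, 1 second
    $spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    # once NVMe0 has been attached over that socket (see the earlier sketch), trigger the run
    $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests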
00:33:41.055 00:33:41.055 Latency(us) 00:33:41.055 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:41.055 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:41.055 Verification LBA range: start 0x0 length 0x4000 00:33:41.055 NVMe0n1 : 1.01 8891.65 34.73 0.00 0.00 14336.12 2744.32 11796.48 00:33:41.055 =================================================================================================================== 00:33:41.055 Total : 8891.65 34.73 0.00 0.00 14336.12 2744.32 11796.48 00:33:41.315 20:25:33 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:41.315 20:25:33 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:33:41.315 20:25:33 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:41.576 20:25:33 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:41.576 20:25:33 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:33:41.836 20:25:34 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:42.097 20:25:34 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:33:45.400 20:25:37 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:45.400 20:25:37 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:33:45.400 20:25:37 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 248346 00:33:45.400 20:25:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 248346 ']' 00:33:45.400 20:25:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 248346 00:33:45.400 20:25:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:33:45.400 20:25:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:45.400 20:25:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 248346 00:33:45.400 20:25:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:45.400 20:25:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:45.400 20:25:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 248346' 00:33:45.400 killing process with pid 248346 00:33:45.400 20:25:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 248346 00:33:45.400 20:25:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 248346 00:33:45.400 20:25:37 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:33:45.400 20:25:37 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:45.660 20:25:37 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:33:45.660 20:25:37 
nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:45.660 20:25:37 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:33:45.661 20:25:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:45.661 20:25:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:33:45.661 20:25:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:45.661 20:25:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:33:45.661 20:25:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:45.661 20:25:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:45.661 rmmod nvme_tcp 00:33:45.661 rmmod nvme_fabrics 00:33:45.661 rmmod nvme_keyring 00:33:45.661 20:25:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:45.661 20:25:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:33:45.661 20:25:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:33:45.661 20:25:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 244855 ']' 00:33:45.661 20:25:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 244855 00:33:45.661 20:25:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 244855 ']' 00:33:45.661 20:25:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 244855 00:33:45.661 20:25:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:33:45.661 20:25:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:45.661 20:25:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 244855 00:33:45.661 20:25:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:45.661 20:25:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:45.661 20:25:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 244855' 00:33:45.661 killing process with pid 244855 00:33:45.661 20:25:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 244855 00:33:45.661 [2024-05-15 20:25:38.072375] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:33:45.661 20:25:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 244855 00:33:45.920 20:25:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:45.920 20:25:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:45.920 20:25:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:45.920 20:25:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:45.920 20:25:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:45.920 20:25:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:45.920 20:25:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:45.920 20:25:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:47.835 20:25:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:47.835 00:33:47.835 real 0m40.119s 00:33:47.835 user 2m0.840s 
00:33:47.835 sys 0m9.173s 00:33:47.835 20:25:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:47.835 20:25:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:47.835 ************************************ 00:33:47.835 END TEST nvmf_failover 00:33:47.835 ************************************ 00:33:48.097 20:25:40 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:48.097 20:25:40 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:48.097 20:25:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:48.097 20:25:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:48.097 ************************************ 00:33:48.097 START TEST nvmf_host_discovery 00:33:48.097 ************************************ 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:48.097 * Looking for test storage... 00:33:48.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 
-- # have_pci_nics=0 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:33:48.097 20:25:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
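discovery.sh, sourced a few lines above, pins the discovery service to port 8009 under the well-known NQN nqn.2014-08.org.nvmexpress.discovery. Once the target brought up later in this log is listening there, a host could query it with nvme-cli; the command below is illustrative only and is not part of the captured trace:

    # hypothetical discovery query against the listener this test configures (not in the trace)
    nvme discover -t tcp -a 10.0.0.2 -s 8009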
00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:56.241 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:56.241 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == 
e810 ]] 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:56.241 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:56.242 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:56.242 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:56.242 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:56.242 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:56.242 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:56.242 Found net devices under 0000:31:00.0: cvl_0_0 00:33:56.242 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:56.242 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:56.242 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:56.242 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:56.242 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:56.242 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:56.242 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:56.242 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:56.242 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:56.242 Found net devices under 0000:31:00.1: cvl_0_1 00:33:56.242 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:56.242 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:56.242 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:33:56.242 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:56.242 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:56.242 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:56.242 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:56.242 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:56.242 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:56.242 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:56.242 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:56.242 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:56.242 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:56.242 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:56.242 20:25:48 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:56.242 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:56.242 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:56.242 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:56.242 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:56.503 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:56.503 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:56.503 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:56.503 20:25:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:56.765 20:25:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:56.765 20:25:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:56.765 20:25:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:56.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:56.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.705 ms 00:33:56.765 00:33:56.765 --- 10.0.0.2 ping statistics --- 00:33:56.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:56.765 rtt min/avg/max/mdev = 0.705/0.705/0.705/0.000 ms 00:33:56.765 20:25:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:56.765 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:56.765 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.417 ms 00:33:56.765 00:33:56.765 --- 10.0.0.1 ping statistics --- 00:33:56.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:56.765 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:33:56.765 20:25:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:56.765 20:25:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:33:56.765 20:25:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:56.765 20:25:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:56.765 20:25:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:56.765 20:25:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:56.765 20:25:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:56.765 20:25:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:56.765 20:25:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:56.765 20:25:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:33:56.765 20:25:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:56.765 20:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:56.765 20:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:56.765 20:25:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=255212 00:33:56.765 20:25:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 255212 00:33:56.765 20:25:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:56.765 20:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 255212 ']' 00:33:56.765 20:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:56.765 20:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:56.765 20:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:56.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:56.765 20:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:56.765 20:25:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:56.765 [2024-05-15 20:25:49.209378] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:33:56.765 [2024-05-15 20:25:49.209451] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:56.765 EAL: No free 2048 kB hugepages reported on node 1 00:33:57.027 [2024-05-15 20:25:49.287419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:57.027 [2024-05-15 20:25:49.356292] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
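The nvmf_tcp_init steps traced above (nvmf/common.sh) move the target-side port into its own network namespace so that host and target can talk over real TCP on a single machine. Condensed from the commands shown in the trace, with the interface names (cvl_0_0, cvl_0_1) and addresses exactly as logged:
    ip netns add cvl_0_0_ns_spdk                                        # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, default netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the netns
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP I/O port
    ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check
The nvmf_tgt target application is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -m 0x2, as traced), which is why the DPDK/EAL and app startup notices that follow are emitted by the namespaced process.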
00:33:57.027 [2024-05-15 20:25:49.356334] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:57.027 [2024-05-15 20:25:49.356342] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:57.027 [2024-05-15 20:25:49.356348] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:57.027 [2024-05-15 20:25:49.356353] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:57.027 [2024-05-15 20:25:49.356379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:57.598 20:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:57.598 20:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:33:57.598 20:25:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:57.598 20:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:57.598 20:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:57.859 20:25:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:57.859 20:25:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:57.859 20:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.859 20:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:57.859 [2024-05-15 20:25:50.130889] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:57.859 20:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.859 20:25:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:33:57.859 20:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.859 20:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:57.859 [2024-05-15 20:25:50.142865] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:33:57.859 [2024-05-15 20:25:50.143087] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:57.859 20:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.859 20:25:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:33:57.859 20:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.859 20:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:57.859 null0 00:33:57.859 20:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.859 20:25:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:33:57.859 20:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.859 20:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:57.859 null1 00:33:57.859 20:25:50 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.859 20:25:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:33:57.859 20:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.859 20:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:57.859 20:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.859 20:25:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=255517 00:33:57.859 20:25:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 255517 /tmp/host.sock 00:33:57.859 20:25:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:33:57.859 20:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 255517 ']' 00:33:57.859 20:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:33:57.859 20:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:57.859 20:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:57.859 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:57.859 20:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:57.859 20:25:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:57.859 [2024-05-15 20:25:50.227771] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:33:57.859 [2024-05-15 20:25:50.227817] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid255517 ] 00:33:57.859 EAL: No free 2048 kB hugepages reported on node 1 00:33:57.859 [2024-05-15 20:25:50.308233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:58.119 [2024-05-15 20:25:50.372777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:58.691 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:58.691 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:33:58.691 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:58.691 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:33:58.691 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.691 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:58.691 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.691 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:33:58.691 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.691 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:58.691 20:25:51 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.691 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:33:58.691 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:33:58.691 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:58.691 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.691 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:58.691 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:58.691 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:58.691 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:58.691 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.691 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:33:58.691 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:33:58.691 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:58.691 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:58.691 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:58.691 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.691 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:58.691 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:58.691 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.951 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:33:58.951 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:33:58.951 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.951 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:58.951 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.951 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:33:58.951 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:58.951 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.951 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:58.951 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:58.951 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:58.951 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:58.951 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.951 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:33:58.952 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:33:58.952 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:58.952 20:25:51 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:33:58.952 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.952 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:58.952 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:58.952 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:58.952 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.952 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:33:58.952 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:33:58.952 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.952 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:58.952 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.952 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:33:58.952 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:58.952 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.952 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:58.952 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:58.952 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:58.952 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:58.952 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:58.952 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:33:58.952 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:33:58.952 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:58.952 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:58.952 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:58.952 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:58.952 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:58.952 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:58.952 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:59.214 [2024-05-15 20:25:51.462476] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:33:59.214 
20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:33:59.214 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:59.215 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:59.215 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.215 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:59.215 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:59.215 20:25:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:59.215 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.215 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:33:59.215 20:25:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:33:59.786 [2024-05-15 20:25:52.120238] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:59.786 [2024-05-15 20:25:52.120263] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:59.786 [2024-05-15 20:25:52.120280] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:59.786 [2024-05-15 20:25:52.210540] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:00.046 [2024-05-15 20:25:52.311310] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:34:00.046 [2024-05-15 20:25:52.311336] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:00.307 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:34:00.307 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:00.307 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:34:00.307 20:25:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:00.307 20:25:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:00.307 20:25:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:00.307 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.307 20:25:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:00.307 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:00.307 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.307 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.307 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:34:00.307 20:25:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:00.307 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:00.307 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:34:00.307 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:34:00.307 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:34:00.307 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:34:00.307 20:25:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:00.307 20:25:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:00.307 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.307 20:25:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:00.307 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:00.307 20:25:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:00.307 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.307 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:34:00.307 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:34:00.307 20:25:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:00.307 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:00.307 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:34:00.307 20:25:52 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:34:00.307 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:34:00.567 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:34:00.567 20:25:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:00.567 20:25:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:00.567 20:25:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:00.567 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.567 20:25:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:00.567 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:00.567 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.567 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:34:00.567 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:34:00.567 20:25:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:34:00.567 20:25:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:00.567 20:25:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:00.567 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:00.567 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:34:00.567 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:34:00.567 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:00.567 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:34:00.567 20:25:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:00.567 20:25:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:00.567 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.567 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:00.567 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.567 20:25:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:00.567 20:25:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:34:00.567 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:34:00.567 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:34:00.567 20:25:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:34:00.567 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.567 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:00.567 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.567 20:25:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:00.568 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:00.568 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:34:00.568 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:34:00.568 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:00.568 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:34:00.568 20:25:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:00.568 20:25:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:00.568 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.568 20:25:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:00.568 20:25:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:00.568 20:25:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:00.828 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.828 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:00.828 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:34:00.828 20:25:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:34:00.828 20:25:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:00.828 20:25:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:00.828 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:00.828 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:34:00.828 20:25:53 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@912 -- # (( max-- )) 00:34:00.828 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:00.828 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:34:00.828 20:25:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:34:00.828 20:25:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:00.828 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.828 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:00.828 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.828 20:25:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:00.828 20:25:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:00.828 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:34:00.828 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:34:00.828 20:25:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:34:00.828 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.828 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:00.828 [2024-05-15 20:25:53.219228] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:00.828 [2024-05-15 20:25:53.220048] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:00.828 [2024-05-15 20:25:53.220075] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:00.828 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.828 20:25:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:00.828 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:00.828 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:34:00.828 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:34:00.828 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:00.828 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:34:00.828 20:25:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:00.828 20:25:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:00.828 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.828 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:00.828 20:25:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:00.828 20:25:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:00.829 20:25:53 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.829 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.829 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:34:00.829 20:25:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:00.829 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:00.829 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:34:00.829 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:34:00.829 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:00.829 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:34:00.829 20:25:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:00.829 20:25:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:00.829 20:25:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:00.829 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.829 20:25:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:00.829 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:00.829 [2024-05-15 20:25:53.307691] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:34:00.829 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.089 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:01.089 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:34:01.089 20:25:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:01.089 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:01.089 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:34:01.089 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:34:01.089 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:01.089 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:34:01.089 20:25:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:01.089 20:25:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:01.089 20:25:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:01.089 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.089 20:25:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:01.089 20:25:53 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:34:01.089 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.089 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:34:01.089 20:25:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:34:01.089 [2024-05-15 20:25:53.574056] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:01.089 [2024-05-15 20:25:53.574073] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:01.089 [2024-05-15 20:25:53.574078] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:02.032 [2024-05-15 20:25:54.503650] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:02.032 [2024-05-15 20:25:54.503671] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:02.032 [2024-05-15 20:25:54.505044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:02.032 [2024-05-15 20:25:54.505062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:02.032 [2024-05-15 20:25:54.505071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:02.032 [2024-05-15 20:25:54.505083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:02.032 [2024-05-15 20:25:54.505091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:02.032 [2024-05-15 20:25:54.505098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:02.032 [2024-05-15 20:25:54.505106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:02.032 [2024-05-15 20:25:54.505112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:02.032 [2024-05-15 20:25:54.505119] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb5e40 is same with the state(5) to be set 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' 
'"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:02.032 [2024-05-15 20:25:54.515059] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb5e40 (9): Bad file descriptor 00:34:02.032 [2024-05-15 20:25:54.525098] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:02.032 [2024-05-15 20:25:54.525503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.032 [2024-05-15 20:25:54.525952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.032 [2024-05-15 20:25:54.525965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb5e40 with addr=10.0.0.2, port=4420 00:34:02.032 [2024-05-15 20:25:54.525975] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb5e40 is same with the state(5) to be set 00:34:02.032 [2024-05-15 20:25:54.525993] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb5e40 (9): Bad file descriptor 00:34:02.032 [2024-05-15 20:25:54.526020] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:02.032 [2024-05-15 20:25:54.526028] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:02.032 [2024-05-15 20:25:54.526036] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:02.032 [2024-05-15 20:25:54.526052] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.032 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.294 [2024-05-15 20:25:54.535155] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:02.294 [2024-05-15 20:25:54.535640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.294 [2024-05-15 20:25:54.535900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.294 [2024-05-15 20:25:54.535918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb5e40 with addr=10.0.0.2, port=4420 00:34:02.294 [2024-05-15 20:25:54.535927] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb5e40 is same with the state(5) to be set 00:34:02.294 [2024-05-15 20:25:54.535945] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb5e40 (9): Bad file descriptor 00:34:02.294 [2024-05-15 20:25:54.535969] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:02.294 [2024-05-15 20:25:54.535977] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:02.294 [2024-05-15 20:25:54.535985] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:02.294 [2024-05-15 20:25:54.535999] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.294 [2024-05-15 20:25:54.545208] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:02.294 [2024-05-15 20:25:54.545605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.294 [2024-05-15 20:25:54.545912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.294 [2024-05-15 20:25:54.545923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb5e40 with addr=10.0.0.2, port=4420 00:34:02.294 [2024-05-15 20:25:54.545930] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb5e40 is same with the state(5) to be set 00:34:02.294 [2024-05-15 20:25:54.545941] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb5e40 (9): Bad file descriptor 00:34:02.294 [2024-05-15 20:25:54.545952] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:02.294 [2024-05-15 20:25:54.545958] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:02.294 [2024-05-15 20:25:54.545965] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:02.294 [2024-05-15 20:25:54.545975] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
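All of this reconnect noise belongs to the path controller created by the host-side discovery service, which was started earlier in the trace against the discovery listener on port 8009. For reference, a roughly equivalent standalone form of that RPC against the host's /tmp/host.sock (again via scripts/rpc.py, path assumed; the flags are taken from the trace) is:
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test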
00:34:02.294 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.294 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:34:02.294 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:02.294 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:02.294 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:34:02.294 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:34:02.294 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:02.294 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:34:02.294 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:02.294 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:02.294 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:02.294 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:02.294 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.294 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:02.295 [2024-05-15 20:25:54.555264] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:02.295 [2024-05-15 20:25:54.555666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.295 [2024-05-15 20:25:54.556070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.295 [2024-05-15 20:25:54.556080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb5e40 with addr=10.0.0.2, port=4420 00:34:02.295 [2024-05-15 20:25:54.556092] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb5e40 is same with the state(5) to be set 00:34:02.295 [2024-05-15 20:25:54.556104] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb5e40 (9): Bad file descriptor 00:34:02.295 [2024-05-15 20:25:54.556127] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:02.295 [2024-05-15 20:25:54.556134] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:02.295 [2024-05-15 20:25:54.556141] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:02.295 [2024-05-15 20:25:54.556152] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
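The '(( max-- ))' / 'eval' / 'sleep 1' lines interleaved with the errors come from the test's polling helper, which retries a shell condition for up to ten seconds rather than asserting it once. A minimal sketch of that pattern, assuming the details of the real waitforcondition() in common/autotest_common.sh:
    waitforcondition() {
        local cond=$1 max=10
        while (( max-- )); do
            eval "$cond" && return 0   # e.g. cond='[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
            sleep 1                    # condition not met yet; poll again
        done
        return 1                       # give up after ~10 attempts
    }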
00:34:02.295 [2024-05-15 20:25:54.565323] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:02.295 [2024-05-15 20:25:54.565737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.295 [2024-05-15 20:25:54.565961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.295 [2024-05-15 20:25:54.565971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb5e40 with addr=10.0.0.2, port=4420 00:34:02.295 [2024-05-15 20:25:54.565979] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb5e40 is same with the state(5) to be set 00:34:02.295 [2024-05-15 20:25:54.565990] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb5e40 (9): Bad file descriptor 00:34:02.295 [2024-05-15 20:25:54.566009] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:02.295 [2024-05-15 20:25:54.566016] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:02.295 [2024-05-15 20:25:54.566023] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:02.295 [2024-05-15 20:25:54.566034] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.295 [2024-05-15 20:25:54.575377] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:02.295 [2024-05-15 20:25:54.575731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.295 [2024-05-15 20:25:54.576126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.295 [2024-05-15 20:25:54.576136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb5e40 with addr=10.0.0.2, port=4420 00:34:02.295 [2024-05-15 20:25:54.576143] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb5e40 is same with the state(5) to be set 00:34:02.295 [2024-05-15 20:25:54.576154] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb5e40 (9): Bad file descriptor 00:34:02.295 [2024-05-15 20:25:54.576176] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:02.295 [2024-05-15 20:25:54.576183] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:02.295 [2024-05-15 20:25:54.576190] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:02.295 [2024-05-15 20:25:54.576201] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
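Once the stale 4420 path is dropped, the test re-checks which transport service IDs the nvme0 controller still has (the get_subsystem_paths check that follows). Outside the test harness the same check could be done roughly as below, using the same jq filter the trace shows (scripts/rpc.py path and /tmp/host.sock socket as above):
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    # expected output once the removed 4420 path has been pruned: 4421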
00:34:02.295 [2024-05-15 20:25:54.585427] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:02.295 [2024-05-15 20:25:54.585838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.295 [2024-05-15 20:25:54.586242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.295 [2024-05-15 20:25:54.586252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb5e40 with addr=10.0.0.2, port=4420 00:34:02.295 [2024-05-15 20:25:54.586259] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb5e40 is same with the state(5) to be set 00:34:02.295 [2024-05-15 20:25:54.586276] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb5e40 (9): Bad file descriptor 00:34:02.295 [2024-05-15 20:25:54.586293] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:02.295 [2024-05-15 20:25:54.586300] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:02.295 [2024-05-15 20:25:54.586307] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:02.295 [2024-05-15 20:25:54.586322] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.295 [2024-05-15 20:25:54.591311] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:34:02.295 [2024-05-15 20:25:54.591332] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@63 -- # xargs 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:34:02.295 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:34:02.296 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:02.296 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:02.296 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.296 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:02.296 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:02.296 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:02.296 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.556 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:34:02.556 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:34:02.556 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:34:02.556 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:34:02.556 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:02.556 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:02.556 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:34:02.556 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:34:02.556 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:02.556 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:34:02.556 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:34:02.556 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:02.556 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.556 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:02.556 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.556 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:34:02.556 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:34:02.556 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:34:02.556 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:34:02.556 20:25:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:02.556 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.556 20:25:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:03.495 [2024-05-15 20:25:55.869096] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:03.495 [2024-05-15 20:25:55.869111] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:03.495 [2024-05-15 20:25:55.869123] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:03.495 [2024-05-15 20:25:55.958406] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:34:03.756 [2024-05-15 20:25:56.227927] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:03.756 [2024-05-15 20:25:56.227957] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:03.756 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.756 20:25:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:03.756 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:34:03.756 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:03.756 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:03.756 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:03.756 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:03.756 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:03.756 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:03.756 20:25:56 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.756 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:03.756 request: 00:34:03.756 { 00:34:03.756 "name": "nvme", 00:34:03.756 "trtype": "tcp", 00:34:03.756 "traddr": "10.0.0.2", 00:34:03.756 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:03.756 "adrfam": "ipv4", 00:34:03.756 "trsvcid": "8009", 00:34:03.756 "wait_for_attach": true, 00:34:03.756 "method": "bdev_nvme_start_discovery", 00:34:03.756 "req_id": 1 00:34:03.756 } 00:34:03.756 Got JSON-RPC error response 00:34:03.756 response: 00:34:03.756 { 00:34:03.756 "code": -17, 00:34:03.756 "message": "File exists" 00:34:03.756 } 00:34:03.756 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:03.756 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:34:03.756 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:03.756 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:03.756 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:03.756 20:25:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:34:03.756 20:25:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:03.756 20:25:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:03.756 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.756 20:25:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:03.756 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:03.756 20:25:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:04.017 request: 00:34:04.017 { 00:34:04.017 "name": "nvme_second", 00:34:04.017 "trtype": "tcp", 00:34:04.017 "traddr": "10.0.0.2", 00:34:04.017 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:04.017 "adrfam": "ipv4", 00:34:04.017 "trsvcid": "8009", 00:34:04.017 "wait_for_attach": true, 00:34:04.017 "method": "bdev_nvme_start_discovery", 00:34:04.017 "req_id": 1 00:34:04.017 } 00:34:04.017 Got JSON-RPC error response 00:34:04.017 response: 00:34:04.017 { 00:34:04.017 "code": -17, 00:34:04.017 "message": "File exists" 00:34:04.017 } 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:04.017 
20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.017 20:25:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:05.400 [2024-05-15 20:25:57.488528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.400 [2024-05-15 20:25:57.488783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.400 [2024-05-15 20:25:57.488799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd2240 with addr=10.0.0.2, port=8010 00:34:05.400 [2024-05-15 20:25:57.488812] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:05.400 [2024-05-15 20:25:57.488822] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:05.400 [2024-05-15 20:25:57.488830] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:06.340 [2024-05-15 20:25:58.490839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.340 [2024-05-15 20:25:58.491137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.340 [2024-05-15 20:25:58.491148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd2240 with addr=10.0.0.2, port=8010 00:34:06.340 [2024-05-15 20:25:58.491160] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:06.340 [2024-05-15 20:25:58.491166] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:06.340 [2024-05-15 20:25:58.491174] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:07.281 [2024-05-15 20:25:59.492805] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:34:07.281 request: 00:34:07.281 { 00:34:07.281 "name": "nvme_second", 00:34:07.281 "trtype": "tcp", 00:34:07.281 "traddr": "10.0.0.2", 00:34:07.281 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:07.281 
"adrfam": "ipv4", 00:34:07.281 "trsvcid": "8010", 00:34:07.281 "attach_timeout_ms": 3000, 00:34:07.281 "method": "bdev_nvme_start_discovery", 00:34:07.281 "req_id": 1 00:34:07.281 } 00:34:07.281 Got JSON-RPC error response 00:34:07.281 response: 00:34:07.281 { 00:34:07.281 "code": -110, 00:34:07.281 "message": "Connection timed out" 00:34:07.281 } 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 255517 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:07.281 rmmod nvme_tcp 00:34:07.281 rmmod nvme_fabrics 00:34:07.281 rmmod nvme_keyring 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 255212 ']' 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 255212 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 255212 ']' 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 255212 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 255212 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 255212' 00:34:07.281 killing process with pid 255212 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 255212 00:34:07.281 [2024-05-15 20:25:59.675804] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:34:07.281 20:25:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 255212 00:34:07.542 20:25:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:07.542 20:25:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:07.542 20:25:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:07.542 20:25:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:07.542 20:25:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:07.542 20:25:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:07.542 20:25:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:07.543 20:25:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:09.533 20:26:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:09.533 00:34:09.533 real 0m21.499s 00:34:09.533 user 0m24.223s 00:34:09.533 sys 0m7.560s 00:34:09.533 20:26:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:09.533 20:26:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:09.533 ************************************ 00:34:09.533 END TEST nvmf_host_discovery 00:34:09.533 ************************************ 00:34:09.533 20:26:01 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:09.533 20:26:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:09.533 20:26:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:09.533 20:26:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:09.533 ************************************ 00:34:09.533 START TEST nvmf_host_multipath_status 00:34:09.533 ************************************ 00:34:09.533 20:26:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:09.794 * Looking for test storage... 
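A note on the two negative checks that closed out the discovery test above: starting a second discovery service against 10.0.0.2:8009 on the same host socket is expected to fail with JSON-RPC error -17 (File exists), and pointing one at port 8010, where nothing is listening, with a 3000 ms attach timeout is expected to fail with -110 (Connection timed out); the NOT/es=1 wrapper counts those failures as a pass. As standalone invocations (a sketch; arguments as shown in the trace, rpc.py path abbreviated):

    # Duplicate discovery against 10.0.0.2:8009 -> JSON-RPC -17 "File exists"
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
    # No listener on 8010, 3000 ms attach timeout -> JSON-RPC -110 "Connection timed out"
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
        -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000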
00:34:09.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:34:09.794 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:09.794 20:26:02 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:09.795 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:34:09.795 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:09.795 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:09.795 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:09.795 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:09.795 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:09.795 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:09.795 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:09.795 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:09.795 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:09.795 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:09.795 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:34:09.795 20:26:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:17.936 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:17.936 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:17.936 Found net devices under 0000:31:00.0: cvl_0_0 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:17.936 Found net devices under 0000:31:00.1: cvl_0_1 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:17.936 20:26:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:17.936 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:17.936 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:17.936 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:17.936 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:17.936 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:17.936 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:17.936 20:26:10 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:17.936 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:17.936 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:17.936 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:17.936 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:17.936 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:17.936 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:17.936 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:17.936 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:17.936 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:17.936 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:17.936 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:17.936 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:17.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:17.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.735 ms 00:34:17.936 00:34:17.936 --- 10.0.0.2 ping statistics --- 00:34:17.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:17.936 rtt min/avg/max/mdev = 0.735/0.735/0.735/0.000 ms 00:34:17.936 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:17.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:17.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.389 ms 00:34:17.936 00:34:17.936 --- 10.0.0.1 ping statistics --- 00:34:17.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:17.936 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:34:17.936 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:17.936 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:34:17.936 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:17.936 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:17.936 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:17.936 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:17.936 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:17.936 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:17.936 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:17.936 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:34:17.937 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:17.937 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:17.937 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:17.937 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=262095 00:34:17.937 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 262095 00:34:17.937 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:17.937 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 262095 ']' 00:34:17.937 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:17.937 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:17.937 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:17.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:17.937 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:17.937 20:26:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:18.198 [2024-05-15 20:26:10.436687] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:34:18.198 [2024-05-15 20:26:10.436743] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:18.198 EAL: No free 2048 kB hugepages reported on node 1 00:34:18.198 [2024-05-15 20:26:10.529056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:18.198 [2024-05-15 20:26:10.623968] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:18.198 [2024-05-15 20:26:10.624027] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:18.198 [2024-05-15 20:26:10.624036] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:18.198 [2024-05-15 20:26:10.624044] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:18.198 [2024-05-15 20:26:10.624050] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:18.198 [2024-05-15 20:26:10.624178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:18.198 [2024-05-15 20:26:10.624184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:19.139 20:26:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:19.140 20:26:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:34:19.140 20:26:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:19.140 20:26:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:19.140 20:26:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:19.140 20:26:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:19.140 20:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=262095 00:34:19.140 20:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:19.140 [2024-05-15 20:26:11.530416] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:19.140 20:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:19.401 Malloc0 00:34:19.401 20:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:34:19.661 20:26:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:19.921 20:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:19.921 [2024-05-15 20:26:12.373438] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:34:19.921 [2024-05-15 20:26:12.373660] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:19.921 20:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:20.181 [2024-05-15 20:26:12.578155] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:20.181 20:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=262461 00:34:20.181 20:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:20.181 20:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:34:20.181 20:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 262461 /var/tmp/bdevperf.sock 00:34:20.181 20:26:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 262461 ']' 00:34:20.181 20:26:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:20.181 20:26:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:20.181 20:26:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:20.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
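At this point the target side of the multipath test is fully configured and bdevperf has been started with -z (wait for RPC configuration) on /var/tmp/bdevperf.sock. Condensed, the target-side RPC sequence the log above walks through is the following; this is a reading aid built only from the rpc.py calls visible in the log (paths shortened), not the multipath_status.sh source itself:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The -r flag enables ANA reporting on cnode1, and the two listeners on ports 4420 and 4421 are the two paths whose ANA state the host-side checks below repeatedly flip between optimized, non_optimized, and inaccessible via nvmf_subsystem_listener_set_ana_state, with the resulting per-path current/connected/accessible flags read back through bdev_nvme_get_io_paths and jq.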
00:34:20.181 20:26:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:20.181 20:26:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:20.441 20:26:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:20.441 20:26:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:34:20.441 20:26:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:20.702 20:26:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:34:21.272 Nvme0n1 00:34:21.272 20:26:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:21.534 Nvme0n1 00:34:21.534 20:26:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:34:21.534 20:26:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:34:24.079 20:26:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:34:24.079 20:26:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:24.079 20:26:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:24.079 20:26:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:34:25.019 20:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:34:25.019 20:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:25.019 20:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:25.019 20:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:25.280 20:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:25.280 20:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:25.280 20:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:25.280 20:26:17 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:25.540 20:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:25.540 20:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:25.540 20:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:25.540 20:26:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:25.800 20:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:25.800 20:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:25.800 20:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:25.800 20:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:25.800 20:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:25.800 20:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:25.800 20:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:25.800 20:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:26.061 20:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:26.061 20:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:26.061 20:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:26.061 20:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:26.321 20:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:26.321 20:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:34:26.321 20:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:26.581 20:26:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:26.841 20:26:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:34:27.783 20:26:20 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:34:27.783 20:26:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:27.783 20:26:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:27.783 20:26:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:28.043 20:26:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:28.043 20:26:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:28.043 20:26:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:28.043 20:26:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:28.043 20:26:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:28.043 20:26:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:28.304 20:26:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:28.304 20:26:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:28.304 20:26:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:28.304 20:26:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:28.304 20:26:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:28.304 20:26:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:28.565 20:26:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:28.565 20:26:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:28.565 20:26:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:28.565 20:26:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:28.826 20:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:28.826 20:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:28.826 20:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:28.826 20:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:29.086 20:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:29.086 20:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:34:29.086 20:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:29.346 20:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:29.607 20:26:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:34:30.558 20:26:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:34:30.558 20:26:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:30.558 20:26:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:30.558 20:26:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:30.822 20:26:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:30.822 20:26:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:30.822 20:26:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:30.822 20:26:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:30.822 20:26:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:30.822 20:26:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:30.822 20:26:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:30.822 20:26:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:31.081 20:26:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:31.081 20:26:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:31.081 20:26:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:31.081 20:26:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:31.342 20:26:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:31.342 20:26:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:31.342 20:26:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:31.342 20:26:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:31.603 20:26:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:31.603 20:26:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:31.603 20:26:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:31.603 20:26:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:31.864 20:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:31.864 20:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:34:31.864 20:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:32.124 20:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:32.124 20:26:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:34:33.065 20:26:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:34:33.065 20:26:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:33.326 20:26:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:33.326 20:26:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:33.326 20:26:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:33.326 20:26:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:33.326 20:26:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:33.326 20:26:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:33.586 20:26:26 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:33.586 20:26:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:33.586 20:26:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:33.586 20:26:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:33.846 20:26:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:33.846 20:26:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:33.846 20:26:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:33.846 20:26:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:34.166 20:26:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:34.166 20:26:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:34.166 20:26:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:34.166 20:26:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:34.166 20:26:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:34.166 20:26:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:34.166 20:26:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:34.166 20:26:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:34.425 20:26:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:34.425 20:26:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:34:34.425 20:26:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:34.685 20:26:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:34.945 20:26:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:34:35.885 20:26:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:34:35.886 20:26:28 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:35.886 20:26:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:35.886 20:26:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:36.146 20:26:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:36.146 20:26:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:36.146 20:26:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:36.146 20:26:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:36.406 20:26:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:36.406 20:26:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:36.406 20:26:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:36.406 20:26:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:36.406 20:26:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:36.406 20:26:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:36.406 20:26:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:36.406 20:26:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:36.667 20:26:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:36.667 20:26:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:36.667 20:26:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:36.667 20:26:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:36.927 20:26:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:36.927 20:26:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:36.927 20:26:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:36.927 20:26:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:37.186 20:26:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:37.186 20:26:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:34:37.186 20:26:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:37.446 20:26:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:37.706 20:26:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:34:38.648 20:26:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:34:38.648 20:26:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:38.648 20:26:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.648 20:26:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:38.908 20:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:38.908 20:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:38.908 20:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:38.908 20:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:38.908 20:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:38.908 20:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:39.167 20:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:39.167 20:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:39.167 20:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:39.167 20:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:39.167 20:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:39.167 20:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:39.428 20:26:31 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:39.428 20:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:39.428 20:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:39.428 20:26:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:39.689 20:26:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:39.689 20:26:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:39.689 20:26:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:39.689 20:26:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:39.950 20:26:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:39.950 20:26:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:34:40.210 20:26:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:34:40.210 20:26:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:40.470 20:26:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:40.470 20:26:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:34:41.852 20:26:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:34:41.852 20:26:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:41.852 20:26:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:41.852 20:26:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:41.852 20:26:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:41.852 20:26:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:41.852 20:26:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:41.852 20:26:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").current' 00:34:42.113 20:26:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:42.113 20:26:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:42.113 20:26:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:42.113 20:26:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:42.113 20:26:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:42.113 20:26:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:42.113 20:26:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:42.113 20:26:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:42.373 20:26:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:42.373 20:26:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:42.373 20:26:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:42.373 20:26:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:42.633 20:26:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:42.633 20:26:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:42.633 20:26:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:42.633 20:26:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:42.893 20:26:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:42.893 20:26:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:34:42.893 20:26:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:43.154 20:26:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:43.414 20:26:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:34:44.357 20:26:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true 
true true true true 00:34:44.357 20:26:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:44.357 20:26:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.357 20:26:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:44.617 20:26:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:44.617 20:26:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:44.617 20:26:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.617 20:26:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:44.878 20:26:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:44.878 20:26:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:44.878 20:26:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.878 20:26:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:44.878 20:26:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:44.878 20:26:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:44.878 20:26:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.878 20:26:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:45.138 20:26:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:45.138 20:26:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:45.138 20:26:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:45.138 20:26:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:45.398 20:26:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:45.398 20:26:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:45.398 20:26:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:45.398 20:26:37 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:45.659 20:26:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:45.659 20:26:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:34:45.659 20:26:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:45.919 20:26:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:45.919 20:26:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:34:47.304 20:26:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:34:47.304 20:26:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:47.304 20:26:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.304 20:26:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:47.304 20:26:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:47.304 20:26:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:47.304 20:26:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.304 20:26:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:47.581 20:26:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:47.581 20:26:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:47.581 20:26:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.581 20:26:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:47.952 20:26:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:47.952 20:26:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:47.952 20:26:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.952 20:26:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:47.952 20:26:40 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:47.952 20:26:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:47.952 20:26:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.952 20:26:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:48.213 20:26:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:48.213 20:26:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:48.213 20:26:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:48.213 20:26:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:48.473 20:26:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:48.473 20:26:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:34:48.473 20:26:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:48.473 20:26:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:48.734 20:26:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:34:50.117 20:26:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:34:50.117 20:26:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:50.117 20:26:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.117 20:26:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:50.117 20:26:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:50.117 20:26:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:50.117 20:26:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.117 20:26:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:50.377 20:26:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:50.377 20:26:42 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:50.377 20:26:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.377 20:26:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:50.377 20:26:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:50.377 20:26:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:50.377 20:26:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.377 20:26:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:50.637 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:50.637 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:50.637 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.637 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:50.898 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:50.898 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:50.898 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.898 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:51.158 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:51.158 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 262461 00:34:51.158 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 262461 ']' 00:34:51.158 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 262461 00:34:51.158 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:34:51.158 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:51.158 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 262461 00:34:51.158 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:34:51.158 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:34:51.158 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 
262461' 00:34:51.158 killing process with pid 262461 00:34:51.158 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 262461 00:34:51.158 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 262461 00:34:51.158 Connection closed with partial response: 00:34:51.158 00:34:51.158 00:34:51.437 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 262461 00:34:51.437 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:51.437 [2024-05-15 20:26:12.638913] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:34:51.437 [2024-05-15 20:26:12.638969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid262461 ] 00:34:51.437 EAL: No free 2048 kB hugepages reported on node 1 00:34:51.437 [2024-05-15 20:26:12.694537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:51.437 [2024-05-15 20:26:12.746317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:51.437 Running I/O for 90 seconds... 00:34:51.437 [2024-05-15 20:26:26.994890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.437 [2024-05-15 20:26:26.994925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:51.437 [2024-05-15 20:26:26.994943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.437 [2024-05-15 20:26:26.994949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:51.437 [2024-05-15 20:26:26.994960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.437 [2024-05-15 20:26:26.994965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:51.437 [2024-05-15 20:26:26.994975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.437 [2024-05-15 20:26:26.994980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.994990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.438 [2024-05-15 20:26:26.994995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.995006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.438 [2024-05-15 20:26:26.995010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.995021] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.438 [2024-05-15 20:26:26.995025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.995036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.438 [2024-05-15 20:26:26.995041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.995168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.438 [2024-05-15 20:26:26.995179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.995190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.438 [2024-05-15 20:26:26.995197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.995207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.438 [2024-05-15 20:26:26.995218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.995228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.438 [2024-05-15 20:26:26.995233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.995243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.438 [2024-05-15 20:26:26.995249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.995261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.438 [2024-05-15 20:26:26.995266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.995278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.438 [2024-05-15 20:26:26.995284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.995294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.438 [2024-05-15 20:26:26.995302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.996837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.438 [2024-05-15 20:26:26.996846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.996858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:99320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.438 [2024-05-15 20:26:26.996863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.996874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:99328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.438 [2024-05-15 20:26:26.996879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.996890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.438 [2024-05-15 20:26:26.996896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.996906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:99344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.438 [2024-05-15 20:26:26.996911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.996921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:99352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.438 [2024-05-15 20:26:26.996926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.996936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:99360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.438 [2024-05-15 20:26:26.996941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.996953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.438 [2024-05-15 20:26:26.996959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.996970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.438 [2024-05-15 20:26:26.996975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.996985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.438 [2024-05-15 20:26:26.996990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.997000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.438 [2024-05-15 20:26:26.997006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.997017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.438 [2024-05-15 20:26:26.997022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.997032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.438 [2024-05-15 20:26:26.997037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.997047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:99376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.438 [2024-05-15 20:26:26.997052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.997062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:99384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.438 [2024-05-15 20:26:26.997068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.997078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.438 [2024-05-15 20:26:26.997083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.997094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:99400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.438 [2024-05-15 20:26:26.997099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.997228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.438 [2024-05-15 20:26:26.997236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.997247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.438 [2024-05-15 20:26:26.997253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.997265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.438 [2024-05-15 
20:26:26.997270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.997280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:99432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.438 [2024-05-15 20:26:26.997286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.997296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:99440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.438 [2024-05-15 20:26:26.997301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.997311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.438 [2024-05-15 20:26:26.997320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.997330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.438 [2024-05-15 20:26:26.997335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.997346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.438 [2024-05-15 20:26:26.997351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.997361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:99464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.438 [2024-05-15 20:26:26.997366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.997376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.438 [2024-05-15 20:26:26.997382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:51.438 [2024-05-15 20:26:26.997392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:99480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.997397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.997408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:99488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.997413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.997423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99496 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.997428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.997438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.997443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.997456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.997461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.997472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.997477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.997487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.997493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.997503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:99536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.997508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.997519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.997524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.997534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.997539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.997549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.997554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.997564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.997569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.997579] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:99576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.997585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.997595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.997600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.997610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:99592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.997615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.997626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.997631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.997642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.997648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.997658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.997663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.997673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:99624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.997679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.997689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:99632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.997694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.997704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.997709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.997719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:99648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.997723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0032 p:0 m:0 
dnr:0 00:34:51.439 [2024-05-15 20:26:26.997734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.997739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.997750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.997754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.997764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:99672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.997769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.997779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.997785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.997795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.997800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.997810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.997815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.997825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.997830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.997841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.997846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.997856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.997861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.997871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.997876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.997886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.997891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.997902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.997906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.997917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.997921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.997931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.997936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.997946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.997952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.998250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.998258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.998269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:99784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.998274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.998285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.439 [2024-05-15 20:26:26.998289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:51.439 [2024-05-15 20:26:26.998300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.440 [2024-05-15 20:26:26.998305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.998321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.440 [2024-05-15 20:26:26.998327] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.998337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.440 [2024-05-15 20:26:26.998342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.998352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.440 [2024-05-15 20:26:26.998356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.998367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.440 [2024-05-15 20:26:26.998372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.998382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:99840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.440 [2024-05-15 20:26:26.998387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.998397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.440 [2024-05-15 20:26:26.998402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.998413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.440 [2024-05-15 20:26:26.998418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.998428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.440 [2024-05-15 20:26:26.998433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.998443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:99872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.440 [2024-05-15 20:26:26.998448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.998458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.440 [2024-05-15 20:26:26.998464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.998474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:51.440 [2024-05-15 20:26:26.998479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.998489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.440 [2024-05-15 20:26:26.998494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.998505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:99904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.440 [2024-05-15 20:26:26.998511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.998521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.440 [2024-05-15 20:26:26.998526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.998536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.440 [2024-05-15 20:26:26.998541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.998551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.440 [2024-05-15 20:26:26.998555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.998565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:99936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.440 [2024-05-15 20:26:26.998571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.998581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.440 [2024-05-15 20:26:26.998587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.998737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.440 [2024-05-15 20:26:26.998744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.998754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.440 [2024-05-15 20:26:26.998759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.998770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 
nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.440 [2024-05-15 20:26:26.998775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.998786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.440 [2024-05-15 20:26:26.998791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.998801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.440 [2024-05-15 20:26:26.998805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.998816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.440 [2024-05-15 20:26:26.998820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.998830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.440 [2024-05-15 20:26:26.998837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.998847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.440 [2024-05-15 20:26:26.998853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.998863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.440 [2024-05-15 20:26:26.998867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.998877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.440 [2024-05-15 20:26:26.998882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.998893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.440 [2024-05-15 20:26:26.998898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.998909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.440 [2024-05-15 20:26:26.998914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.998924] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:100032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.440 [2024-05-15 20:26:26.998929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.998940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.440 [2024-05-15 20:26:26.998945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.998955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.440 [2024-05-15 20:26:26.998960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.998970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:100056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.440 [2024-05-15 20:26:26.998975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.998985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:100064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.440 [2024-05-15 20:26:26.998991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.999112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.440 [2024-05-15 20:26:26.999120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.999131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.440 [2024-05-15 20:26:26.999138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.999148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:100088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.440 [2024-05-15 20:26:26.999154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.999164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:100096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.440 [2024-05-15 20:26:26.999170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:51.440 [2024-05-15 20:26:26.999180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:100104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.441 [2024-05-15 20:26:26.999184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 
cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:26.999194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:100112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.441 [2024-05-15 20:26:26.999199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:26.999209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:100120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.441 [2024-05-15 20:26:26.999214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:26.999225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:100128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.441 [2024-05-15 20:26:26.999230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:26.999240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:100136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.441 [2024-05-15 20:26:26.999244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:26.999255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.441 [2024-05-15 20:26:26.999260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:26.999271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.441 [2024-05-15 20:26:26.999276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:26.999286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.441 [2024-05-15 20:26:26.999295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:26.999305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.441 [2024-05-15 20:26:26.999311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:26.999325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.441 [2024-05-15 20:26:26.999334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:26.999345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.441 [2024-05-15 20:26:26.999349] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:26.999721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.441 [2024-05-15 20:26:26.999729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:26.999740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.441 [2024-05-15 20:26:26.999747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:26.999757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.441 [2024-05-15 20:26:26.999763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:26.999773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.441 [2024-05-15 20:26:26.999778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:26.999788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.441 [2024-05-15 20:26:26.999793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:26.999803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.441 [2024-05-15 20:26:26.999808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:26.999818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.441 [2024-05-15 20:26:26.999823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:26.999833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.441 [2024-05-15 20:26:26.999838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:26.999848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.441 [2024-05-15 20:26:26.999854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:26.999864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.441 [2024-05-15 
20:26:26.999869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:26.999879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.441 [2024-05-15 20:26:26.999884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:26.999895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:99320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.441 [2024-05-15 20:26:26.999901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:26.999912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:99328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.441 [2024-05-15 20:26:26.999918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:26.999928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.441 [2024-05-15 20:26:26.999933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:26.999943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:99344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.441 [2024-05-15 20:26:26.999948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:26.999958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:99352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.441 [2024-05-15 20:26:26.999964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:26.999974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:99360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.441 [2024-05-15 20:26:26.999980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:27.000114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.441 [2024-05-15 20:26:27.000122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:27.000133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.441 [2024-05-15 20:26:27.000138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:27.000149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:100288 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.441 [2024-05-15 20:26:27.000154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:27.000164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.441 [2024-05-15 20:26:27.000169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:27.000179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.441 [2024-05-15 20:26:27.000183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:27.000194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.441 [2024-05-15 20:26:27.000199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:27.000211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:99376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.441 [2024-05-15 20:26:27.000216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:27.000226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:99384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.441 [2024-05-15 20:26:27.000231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:27.000241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.441 [2024-05-15 20:26:27.000246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:27.000257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:99400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.441 [2024-05-15 20:26:27.000262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:27.000272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:99408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.441 [2024-05-15 20:26:27.000277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:51.441 [2024-05-15 20:26:27.000287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:99416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.441 [2024-05-15 20:26:27.000293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:51.442 [2024-05-15 20:26:27.000303] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:99424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.442 [2024-05-15 20:26:27.000309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:51.442 [2024-05-15 20:26:27.000323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:99432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.442 [2024-05-15 20:26:27.000329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:51.442 [2024-05-15 20:26:27.000339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:99440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.442 [2024-05-15 20:26:27.000344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:51.442 [2024-05-15 20:26:27.001529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.442 [2024-05-15 20:26:27.001537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:51.442 [2024-05-15 20:26:27.001549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:99448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.442 [2024-05-15 20:26:27.001554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:51.442 [2024-05-15 20:26:27.001564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.442 [2024-05-15 20:26:27.001569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:51.442 [2024-05-15 20:26:27.001579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.442 [2024-05-15 20:26:27.001587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:51.442 [2024-05-15 20:26:27.001598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.442 [2024-05-15 20:26:27.001602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:51.442 [2024-05-15 20:26:27.001612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:99480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.442 [2024-05-15 20:26:27.001617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:51.442 [2024-05-15 20:26:27.001627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:99488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.442 [2024-05-15 20:26:27.001632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001e p:0 m:0 
dnr:0
00:34:51.442 [2024-05-15 20:26:27.001643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:51.442 [2024-05-15 20:26:27.001648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001f p:0 m:0 dnr:0
(... output condensed for readability: between 20:26:27.001659 and 20:26:27.024808 a few hundred further nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs follow in the identical format — READ commands on sqid:1, nsid:1, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, lba 99320 through 100136, and WRITE commands on sqid:1, nsid:1, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000, lba 100144 through 100336, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, cid varying, sqhd advancing from 0020 through 007f and wrapping around to 0062 ...)
00:34:51.447 [2024-05-15 20:26:27.024818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT
0x0 00:34:51.447 [2024-05-15 20:26:27.024823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:51.447 [2024-05-15 20:26:27.024834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:100032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.447 [2024-05-15 20:26:27.024839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:51.447 [2024-05-15 20:26:27.024849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.447 [2024-05-15 20:26:27.024855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:51.447 [2024-05-15 20:26:27.024865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:100048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.447 [2024-05-15 20:26:27.024870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:51.447 [2024-05-15 20:26:27.024881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:100056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.447 [2024-05-15 20:26:27.024886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:51.447 [2024-05-15 20:26:27.024896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:100064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.447 [2024-05-15 20:26:27.024902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:51.447 [2024-05-15 20:26:27.024912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.447 [2024-05-15 20:26:27.024918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:51.447 [2024-05-15 20:26:27.024928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:100080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.447 [2024-05-15 20:26:27.024933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:51.447 [2024-05-15 20:26:27.024945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:100088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.447 [2024-05-15 20:26:27.024951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:51.447 [2024-05-15 20:26:27.024961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:100096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.447 [2024-05-15 20:26:27.024966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:51.447 [2024-05-15 20:26:27.024976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:65 nsid:1 lba:100104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.447 [2024-05-15 20:26:27.024981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:51.447 [2024-05-15 20:26:27.024991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.447 [2024-05-15 20:26:27.024997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:51.447 [2024-05-15 20:26:27.025008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:100120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.447 [2024-05-15 20:26:27.025013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:51.447 [2024-05-15 20:26:27.025024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:100128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.447 [2024-05-15 20:26:27.025030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:51.447 [2024-05-15 20:26:27.025040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.447 [2024-05-15 20:26:27.025045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.448 [2024-05-15 20:26:27.025061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.448 [2024-05-15 20:26:27.025078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.448 [2024-05-15 20:26:27.025094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.448 [2024-05-15 20:26:27.025109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.448 [2024-05-15 20:26:27.025125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 
20:26:27.025137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.448 [2024-05-15 20:26:27.025142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.448 [2024-05-15 20:26:27.025158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.448 [2024-05-15 20:26:27.025173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.448 [2024-05-15 20:26:27.025189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.448 [2024-05-15 20:26:27.025204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.448 [2024-05-15 20:26:27.025220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.448 [2024-05-15 20:26:27.025235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.448 [2024-05-15 20:26:27.025251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.448 [2024-05-15 20:26:27.025266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.448 [2024-05-15 20:26:27.025282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:116 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.448 [2024-05-15 20:26:27.025297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.448 [2024-05-15 20:26:27.025319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:99320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.448 [2024-05-15 20:26:27.025337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.448 [2024-05-15 20:26:27.025355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:99336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.448 [2024-05-15 20:26:27.025371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:99344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.448 [2024-05-15 20:26:27.025387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:99352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.448 [2024-05-15 20:26:27.025403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:99360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.448 [2024-05-15 20:26:27.025418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.448 [2024-05-15 20:26:27.025434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.448 [2024-05-15 20:26:27.025450] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.448 [2024-05-15 20:26:27.025465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.448 [2024-05-15 20:26:27.025481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.448 [2024-05-15 20:26:27.025498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.448 [2024-05-15 20:26:27.025514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.448 [2024-05-15 20:26:27.025532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:99384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.448 [2024-05-15 20:26:27.025548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.448 [2024-05-15 20:26:27.025564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:99400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.448 [2024-05-15 20:26:27.025580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:99408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.448 [2024-05-15 20:26:27.025595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:51.448 [2024-05-15 20:26:27.025611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:99424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.448 [2024-05-15 20:26:27.025626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:99432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.448 [2024-05-15 20:26:27.025642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.448 [2024-05-15 20:26:27.025658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.448 [2024-05-15 20:26:27.025674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:99448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.448 [2024-05-15 20:26:27.025690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:51.448 [2024-05-15 20:26:27.025700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:99456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.025705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.025716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:99464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.025721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.025733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:99472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.025738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.025749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:99480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.025754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.025764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 
nsid:1 lba:99488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.025770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.025780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:99496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.025785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.025796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.025801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.025812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:99512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.025816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.025827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.025833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.025843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:99528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.025849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.025859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:99536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.025864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.025876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:99544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.025881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.025892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:99552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.025897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.026436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.026445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.026459] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:99568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.026464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.026475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.026480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.026491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.026496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.026506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.026511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.026522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.026527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.026537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.026543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.026553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.026559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.026570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.026575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.026585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.026590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.026600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.026605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0031 p:0 m:0 
dnr:0 00:34:51.449 [2024-05-15 20:26:27.026616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.026622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.026632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:99656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.026638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.026650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.026655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.026666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.026671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.026681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.026687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.026697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.026702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.026713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.026718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.026728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.026733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.026744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.026749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.026760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.026765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.026776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.026780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.026791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.026796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.026806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.026811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.026822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.026827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.026838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.026844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:51.449 [2024-05-15 20:26:27.026855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.449 [2024-05-15 20:26:27.026860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.026871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.450 [2024-05-15 20:26:27.026877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.026887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.450 [2024-05-15 20:26:27.026892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.026903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.450 [2024-05-15 20:26:27.026908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.026918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.450 [2024-05-15 20:26:27.026924] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.026934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.450 [2024-05-15 20:26:27.026939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.026950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.450 [2024-05-15 20:26:27.026955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.026966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.450 [2024-05-15 20:26:27.026972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.026982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.450 [2024-05-15 20:26:27.026987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.026997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.450 [2024-05-15 20:26:27.027003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.027014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.450 [2024-05-15 20:26:27.027019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.027029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.450 [2024-05-15 20:26:27.027036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.027046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.450 [2024-05-15 20:26:27.027052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.027062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:99872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.450 [2024-05-15 20:26:27.027067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.027078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:51.450 [2024-05-15 20:26:27.027084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.027094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.450 [2024-05-15 20:26:27.027099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.027109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:99896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.450 [2024-05-15 20:26:27.027114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.027125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:99904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.450 [2024-05-15 20:26:27.027131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.027141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.450 [2024-05-15 20:26:27.027146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.027157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.450 [2024-05-15 20:26:27.027162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.027172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.450 [2024-05-15 20:26:27.027178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.027485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:99936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.450 [2024-05-15 20:26:27.027493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.027505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.450 [2024-05-15 20:26:27.027512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.027523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.450 [2024-05-15 20:26:27.027528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.027541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 
nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.450 [2024-05-15 20:26:27.027547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.027557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.450 [2024-05-15 20:26:27.027563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.027573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.450 [2024-05-15 20:26:27.027578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.027589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.450 [2024-05-15 20:26:27.027595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.027606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.450 [2024-05-15 20:26:27.027610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.027622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.450 [2024-05-15 20:26:27.027627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.027637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.450 [2024-05-15 20:26:27.027643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.027653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.450 [2024-05-15 20:26:27.027658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.027668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.450 [2024-05-15 20:26:27.027675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.027685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.450 [2024-05-15 20:26:27.027691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.027701] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.450 [2024-05-15 20:26:27.027707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.027718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:100032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.450 [2024-05-15 20:26:27.027723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.027736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.450 [2024-05-15 20:26:27.027742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.027753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:100048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.450 [2024-05-15 20:26:27.027757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:51.450 [2024-05-15 20:26:27.027768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:100056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.451 [2024-05-15 20:26:27.027774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:51.451 [2024-05-15 20:26:27.027784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:100064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.451 [2024-05-15 20:26:27.027789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:51.451 [2024-05-15 20:26:27.027799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.451 [2024-05-15 20:26:27.027804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:51.451 [2024-05-15 20:26:27.027814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:100080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.451 [2024-05-15 20:26:27.027820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:51.451 [2024-05-15 20:26:27.027830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.451 [2024-05-15 20:26:27.027836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:51.451 [2024-05-15 20:26:27.027846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:100096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.451 [2024-05-15 20:26:27.027852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006c p:0 
m:0 dnr:0 00:34:51.451 [2024-05-15 20:26:27.027862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:100104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.451 [2024-05-15 20:26:27.027867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:51.451 [2024-05-15 20:26:27.027878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:100112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.451 [2024-05-15 20:26:27.027883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:51.451 [2024-05-15 20:26:27.027895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:100120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.451 [2024-05-15 20:26:27.027900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:51.451 [2024-05-15 20:26:27.027910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:100128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.451 [2024-05-15 20:26:27.027916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:51.451 [2024-05-15 20:26:27.027926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:100136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.451 [2024-05-15 20:26:27.027933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:51.451 [2024-05-15 20:26:27.027944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.451 [2024-05-15 20:26:27.027949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:51.451 [2024-05-15 20:26:27.027959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.451 [2024-05-15 20:26:27.027964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:51.451 [2024-05-15 20:26:27.027975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.451 [2024-05-15 20:26:27.027980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:51.451 [2024-05-15 20:26:27.027990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.451 [2024-05-15 20:26:27.027996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:51.451 [2024-05-15 20:26:27.028006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.451 [2024-05-15 20:26:27.028011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:51.451 [2024-05-15 20:26:27.028353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.451 [2024-05-15 20:26:27.028362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:51.451 [2024-05-15 20:26:27.028374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.451 [2024-05-15 20:26:27.028379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:51.451 [2024-05-15 20:26:27.028389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.451 [2024-05-15 20:26:27.028395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:51.451 [2024-05-15 20:26:27.028406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.451 [2024-05-15 20:26:27.028412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:51.451 [2024-05-15 20:26:27.028422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.451 [2024-05-15 20:26:27.028427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:51.451 [2024-05-15 20:26:27.028438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.451 [2024-05-15 20:26:27.028443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:51.451 [2024-05-15 20:26:27.028453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.451 [2024-05-15 20:26:27.028460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:51.451 [2024-05-15 20:26:27.028471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.451 [2024-05-15 20:26:27.028476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:51.451 [2024-05-15 20:26:27.028487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.451 [2024-05-15 20:26:27.028492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:51.451 [2024-05-15 20:26:27.028503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.451 [2024-05-15 
20:26:27.028508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:51.451 [2024-05-15 20:26:27.028519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.451 [2024-05-15 20:26:27.028524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.451 [2024-05-15 20:26:27.028535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.451 [2024-05-15 20:26:27.028542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.451 [2024-05-15 20:26:27.028555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:99320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.451 [2024-05-15 20:26:27.028561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:51.451 [2024-05-15 20:26:27.028572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:99328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.451 [2024-05-15 20:26:27.028577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:51.451 [2024-05-15 20:26:27.028588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:99336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.451 [2024-05-15 20:26:27.028593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:51.451 [2024-05-15 20:26:27.028603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:99344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.451 [2024-05-15 20:26:27.028609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:51.451 [2024-05-15 20:26:27.028619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:99352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.451 [2024-05-15 20:26:27.028624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:51.451 [2024-05-15 20:26:27.028763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:99360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.451 [2024-05-15 20:26:27.028770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:51.451 [2024-05-15 20:26:27.028782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.451 [2024-05-15 20:26:27.028789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:51.451 [2024-05-15 20:26:27.028799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100280 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.451 [2024-05-15 20:26:27.028805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:51.451 [2024-05-15 20:26:27.028815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.451 [2024-05-15 20:26:27.028820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:51.451 [2024-05-15 20:26:27.028831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.451 [2024-05-15 20:26:27.028836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.028847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.452 [2024-05-15 20:26:27.028852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.029996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.452 [2024-05-15 20:26:27.030004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.030015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.452 [2024-05-15 20:26:27.030020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.030031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:99384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.452 [2024-05-15 20:26:27.030037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.030047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.452 [2024-05-15 20:26:27.030052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.030063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:99400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.452 [2024-05-15 20:26:27.030068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.030079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.452 [2024-05-15 20:26:27.030084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.030095] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:99416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.452 [2024-05-15 20:26:27.030099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.030110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:99424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.452 [2024-05-15 20:26:27.030115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.030127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.452 [2024-05-15 20:26:27.030132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.030143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:99440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.452 [2024-05-15 20:26:27.030148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.030158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.452 [2024-05-15 20:26:27.030164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.030174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:99448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.452 [2024-05-15 20:26:27.030180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.030190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:99456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.452 [2024-05-15 20:26:27.030195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.030206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:99464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.452 [2024-05-15 20:26:27.030211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.030222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:99472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.452 [2024-05-15 20:26:27.030227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.030238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.452 [2024-05-15 20:26:27.030243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 
dnr:0 00:34:51.452 [2024-05-15 20:26:27.030253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.452 [2024-05-15 20:26:27.030259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.030386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:99496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.452 [2024-05-15 20:26:27.030394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.030405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.452 [2024-05-15 20:26:27.030411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.030421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.452 [2024-05-15 20:26:27.030427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.030441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.452 [2024-05-15 20:26:27.030446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.030457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.452 [2024-05-15 20:26:27.030463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.030473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:99536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.452 [2024-05-15 20:26:27.030478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.030489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:99544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.452 [2024-05-15 20:26:27.030493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.030505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:99552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.452 [2024-05-15 20:26:27.030510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.030520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:99560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.452 [2024-05-15 20:26:27.030526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.030536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:99568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.452 [2024-05-15 20:26:27.030541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.030552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:99576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.452 [2024-05-15 20:26:27.030558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.030568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:99584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.452 [2024-05-15 20:26:27.030573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.030584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.452 [2024-05-15 20:26:27.030589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.030599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:99600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.452 [2024-05-15 20:26:27.030605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.030615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.452 [2024-05-15 20:26:27.030621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.030631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.452 [2024-05-15 20:26:27.030638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.030648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:99624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.452 [2024-05-15 20:26:27.030654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.030664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.452 [2024-05-15 20:26:27.030669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.030680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.452 [2024-05-15 20:26:27.030685] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.030695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.452 [2024-05-15 20:26:27.030701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.030711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:99656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.452 [2024-05-15 20:26:27.030716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:51.452 [2024-05-15 20:26:27.030726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.030731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.030742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:99672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.030747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.030757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.030762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.030773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.030778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.030789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.030793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.030804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.030809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.030819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.030827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.030838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.030842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.030853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.030859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.030870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.030875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.030885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.030890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.030901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.030906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.030916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.030922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.030933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.030938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.030949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.030955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.030965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.035652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.035688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.035696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.035706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:94 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.035712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.035723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.035728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.035742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.035748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.035759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.035765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.035775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.035781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.035791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.035796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.035807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.035813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.035824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.035829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.035839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.035845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.035855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:99872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.035861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.035871] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.035876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.035887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.035892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.035903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.035908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.035918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.035924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.035936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.035941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.035952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.035957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.036348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.036358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.036370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:99936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.036376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.036387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.036392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.036404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.453 [2024-05-15 20:26:27.036409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 
sqhd:0058 p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.036420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.453 [2024-05-15 20:26:27.036426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.036437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.036442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:51.453 [2024-05-15 20:26:27.036453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.453 [2024-05-15 20:26:27.036458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.036468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.454 [2024-05-15 20:26:27.036474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.036484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.454 [2024-05-15 20:26:27.036489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.036499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.454 [2024-05-15 20:26:27.036505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.036517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.454 [2024-05-15 20:26:27.036522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.036533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.454 [2024-05-15 20:26:27.036538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.036549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.454 [2024-05-15 20:26:27.036555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.036566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.454 [2024-05-15 20:26:27.036571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.036582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.454 [2024-05-15 20:26:27.036587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.036598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:100032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.454 [2024-05-15 20:26:27.036603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.036613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.454 [2024-05-15 20:26:27.036619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.036630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:100048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.454 [2024-05-15 20:26:27.036635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.036646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:100056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.454 [2024-05-15 20:26:27.036651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.036661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:100064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.454 [2024-05-15 20:26:27.036667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.036677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.454 [2024-05-15 20:26:27.036682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.036693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:100080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.454 [2024-05-15 20:26:27.036698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.036708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:100088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.454 [2024-05-15 20:26:27.036715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.036726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:100096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.454 [2024-05-15 
20:26:27.036731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.036741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:100104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.454 [2024-05-15 20:26:27.036746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.036757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:100112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.454 [2024-05-15 20:26:27.036762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.036773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:100120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.454 [2024-05-15 20:26:27.036778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.036788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:100128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.454 [2024-05-15 20:26:27.036793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.036804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:100136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.454 [2024-05-15 20:26:27.036809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.036820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.454 [2024-05-15 20:26:27.036825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.036835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.454 [2024-05-15 20:26:27.036840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.036851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.454 [2024-05-15 20:26:27.036856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.036866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.454 [2024-05-15 20:26:27.036872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.036882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100176 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.454 [2024-05-15 20:26:27.036888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.036898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.454 [2024-05-15 20:26:27.036904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.036914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.454 [2024-05-15 20:26:27.036920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.036930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.454 [2024-05-15 20:26:27.036936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.036946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.454 [2024-05-15 20:26:27.036951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.036962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.454 [2024-05-15 20:26:27.036967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.036978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.454 [2024-05-15 20:26:27.036984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.036994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.454 [2024-05-15 20:26:27.036999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.037010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.454 [2024-05-15 20:26:27.037015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.037025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.454 [2024-05-15 20:26:27.037030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.037041] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.454 [2024-05-15 20:26:27.037047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.037057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.454 [2024-05-15 20:26:27.037063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.037073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.454 [2024-05-15 20:26:27.037078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.454 [2024-05-15 20:26:27.037089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:99320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.454 [2024-05-15 20:26:27.037094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:51.455 [2024-05-15 20:26:27.037106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:99328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.455 [2024-05-15 20:26:27.037112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:51.455 [2024-05-15 20:26:27.037122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.455 [2024-05-15 20:26:27.037128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:51.455 [2024-05-15 20:26:27.037139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:99344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.455 [2024-05-15 20:26:27.037145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:51.455 [2024-05-15 20:26:27.037155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:99352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.455 [2024-05-15 20:26:27.037161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:51.455 [2024-05-15 20:26:27.037171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:99360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.455 [2024-05-15 20:26:27.037177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:51.455 [2024-05-15 20:26:27.037188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.455 [2024-05-15 20:26:27.037193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0009 p:0 m:0 
dnr:0 00:34:51.455 [2024-05-15 20:26:27.037203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.455 [2024-05-15 20:26:27.037209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:51.455 [2024-05-15 20:26:27.037219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.455 [2024-05-15 20:26:27.037223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:51.455 [2024-05-15 20:26:27.037234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.455 [2024-05-15 20:26:27.037239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:51.455 [2024-05-15 20:26:27.037250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.455 [2024-05-15 20:26:27.037255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:51.455 [2024-05-15 20:26:27.037265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.455 [2024-05-15 20:26:27.037270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:51.455 [2024-05-15 20:26:27.037281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:99376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.455 [2024-05-15 20:26:27.037287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:51.455 [2024-05-15 20:26:27.037298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:99384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.455 [2024-05-15 20:26:27.037304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:51.455 [2024-05-15 20:26:27.037318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.455 [2024-05-15 20:26:27.037324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:51.455 [2024-05-15 20:26:27.037335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:99400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.455 [2024-05-15 20:26:27.037340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:51.455 [2024-05-15 20:26:27.037350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:99408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.455 [2024-05-15 20:26:27.037356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:51.455 [2024-05-15 20:26:27.037366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:99416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.455 [2024-05-15 20:26:27.037372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:51.455 [2024-05-15 20:26:27.037382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.455 [2024-05-15 20:26:27.037388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:51.455 [2024-05-15 20:26:27.037398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:99432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.455 [2024-05-15 20:26:27.037404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:51.455 [2024-05-15 20:26:27.037414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:99440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.455 [2024-05-15 20:26:27.037419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:51.455 [2024-05-15 20:26:27.037430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.455 [2024-05-15 20:26:27.037435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:51.455 [2024-05-15 20:26:27.037445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:99448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.455 [2024-05-15 20:26:27.037451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:51.455 [2024-05-15 20:26:27.037461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:99456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.455 [2024-05-15 20:26:27.037466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:51.455 [2024-05-15 20:26:27.037477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:99464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.455 [2024-05-15 20:26:27.037482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:51.455 [2024-05-15 20:26:27.037495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.455 [2024-05-15 20:26:27.037501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:51.455 [2024-05-15 20:26:27.037511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:99480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.455 [2024-05-15 20:26:27.037516] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:51.455 [2024-05-15 20:26:27.038011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.455 [2024-05-15 20:26:27.038019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:51.455 [2024-05-15 20:26:27.038032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:99496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.455 [2024-05-15 20:26:27.038037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:51.455 [2024-05-15 20:26:27.038048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.455 [2024-05-15 20:26:27.038054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:51.455 [2024-05-15 20:26:27.038064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.455 [2024-05-15 20:26:27.038070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.455 [2024-05-15 20:26:27.038081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.455 [2024-05-15 20:26:27.038086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.455 [2024-05-15 20:26:27.038097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.455 [2024-05-15 20:26:27.038102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:51.455 [2024-05-15 20:26:27.038113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:99536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.455 [2024-05-15 20:26:27.038118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:51.455 [2024-05-15 20:26:27.038129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.455 [2024-05-15 20:26:27.038134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:51.455 [2024-05-15 20:26:27.038145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.038150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.038160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:51.456 [2024-05-15 20:26:27.038166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.038176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.038184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.038195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.038200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.038210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:99584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.038216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.038226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:99592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.038232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.038242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.038248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.038259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.038264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.038274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.038279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.038290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.038296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.038306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:99632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.038311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.038326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 
nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.038331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.038342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:99648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.038347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.038358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.038363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.038373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.038380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.038391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.038397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.038407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.038413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.038423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.038429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.038439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.038444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.038455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.038460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.038471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.038476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.038486] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.038492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.038503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.038508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.038518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.038523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.038533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.038539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.038550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.038556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.038565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.038572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.038584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.038590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.038601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.038606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.038617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:99784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.038622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.038633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.038638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 
00:34:51.456 [2024-05-15 20:26:27.038649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.038654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.038911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.038919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.038930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.038936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.038947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.038952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.038963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:99832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.038968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.038979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:99840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.038984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.038995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.039000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.039012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.039017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.039029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:99864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.456 [2024-05-15 20:26:27.039035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:51.456 [2024-05-15 20:26:27.039045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:99872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.457 [2024-05-15 20:26:27.039051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.457 [2024-05-15 20:26:27.039067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.457 [2024-05-15 20:26:27.039084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.457 [2024-05-15 20:26:27.039099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:99904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.457 [2024-05-15 20:26:27.039116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.457 [2024-05-15 20:26:27.039132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.457 [2024-05-15 20:26:27.039243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.457 [2024-05-15 20:26:27.039260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:99936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.457 [2024-05-15 20:26:27.039276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.457 [2024-05-15 20:26:27.039292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.457 [2024-05-15 20:26:27.039309] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.457 [2024-05-15 20:26:27.039334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.457 [2024-05-15 20:26:27.039350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.457 [2024-05-15 20:26:27.039366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.457 [2024-05-15 20:26:27.039381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.457 [2024-05-15 20:26:27.039398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.457 [2024-05-15 20:26:27.039414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.457 [2024-05-15 20:26:27.039430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.457 [2024-05-15 20:26:27.039446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.457 [2024-05-15 20:26:27.039462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:51.457 [2024-05-15 20:26:27.039478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.457 [2024-05-15 20:26:27.039494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:100032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.457 [2024-05-15 20:26:27.039510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.457 [2024-05-15 20:26:27.039527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.457 [2024-05-15 20:26:27.039543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:100056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.457 [2024-05-15 20:26:27.039559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.457 [2024-05-15 20:26:27.039574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.457 [2024-05-15 20:26:27.039590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:100080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.457 [2024-05-15 20:26:27.039605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:100088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.457 [2024-05-15 20:26:27.039621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039631] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.457 [2024-05-15 20:26:27.039637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:100104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.457 [2024-05-15 20:26:27.039652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:100112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.457 [2024-05-15 20:26:27.039668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:100120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.457 [2024-05-15 20:26:27.039684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:100128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.457 [2024-05-15 20:26:27.039699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:100136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.457 [2024-05-15 20:26:27.039716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.457 [2024-05-15 20:26:27.039732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.457 [2024-05-15 20:26:27.039747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.457 [2024-05-15 20:26:27.039763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:51.457 [2024-05-15 20:26:27.039773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.457 [2024-05-15 20:26:27.039778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:51.458 
[2024-05-15 20:26:27.040017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.458 [2024-05-15 20:26:27.040024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.040035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.458 [2024-05-15 20:26:27.040041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.040051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.458 [2024-05-15 20:26:27.040057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.040067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.458 [2024-05-15 20:26:27.040073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.040083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.458 [2024-05-15 20:26:27.040088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.040099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.458 [2024-05-15 20:26:27.040104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.040115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.458 [2024-05-15 20:26:27.040120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.040130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.458 [2024-05-15 20:26:27.040135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.040148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.458 [2024-05-15 20:26:27.040153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.040163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.458 [2024-05-15 20:26:27.040169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:91 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.040179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.458 [2024-05-15 20:26:27.040185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.040196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.458 [2024-05-15 20:26:27.040201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.040213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.458 [2024-05-15 20:26:27.040219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.040230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:99320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.458 [2024-05-15 20:26:27.040235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.040246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:99328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.458 [2024-05-15 20:26:27.040252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.040262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:99336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.458 [2024-05-15 20:26:27.040268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.040279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:99344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.458 [2024-05-15 20:26:27.040284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.040294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:99352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.458 [2024-05-15 20:26:27.040300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.040461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:99360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.458 [2024-05-15 20:26:27.040469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.040481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.458 [2024-05-15 20:26:27.040486] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.040499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.458 [2024-05-15 20:26:27.040505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.040515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.458 [2024-05-15 20:26:27.040521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.040531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.458 [2024-05-15 20:26:27.040536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.040547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.458 [2024-05-15 20:26:27.040552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.041618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.458 [2024-05-15 20:26:27.041626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.041637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:99376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.458 [2024-05-15 20:26:27.041643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.041654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:99384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.458 [2024-05-15 20:26:27.041659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.041670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.458 [2024-05-15 20:26:27.041675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.041686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:99400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.458 [2024-05-15 20:26:27.041691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.041702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:51.458 [2024-05-15 20:26:27.041707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.041718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:99416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.458 [2024-05-15 20:26:27.041723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.041733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.458 [2024-05-15 20:26:27.041739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.041750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:99432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.458 [2024-05-15 20:26:27.041757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.041767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.458 [2024-05-15 20:26:27.041773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.041784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.458 [2024-05-15 20:26:27.041789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.041800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:99448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.458 [2024-05-15 20:26:27.041805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.041816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:99456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.458 [2024-05-15 20:26:27.041821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.041831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:99464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.458 [2024-05-15 20:26:27.041836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.041847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:99472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.458 [2024-05-15 20:26:27.041852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.041862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:113 nsid:1 lba:99480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.458 [2024-05-15 20:26:27.041868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:51.458 [2024-05-15 20:26:27.041879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.459 [2024-05-15 20:26:27.041884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:51.459 [2024-05-15 20:26:27.042008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:99496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.459 [2024-05-15 20:26:27.042016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:51.459 [2024-05-15 20:26:27.042027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.459 [2024-05-15 20:26:27.042034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:51.459 [2024-05-15 20:26:27.042045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.459 [2024-05-15 20:26:27.042050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.459 [2024-05-15 20:26:27.042061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.459 [2024-05-15 20:26:27.042068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.459 [2024-05-15 20:26:27.042079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:99528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.459 [2024-05-15 20:26:27.042085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:51.459 [2024-05-15 20:26:27.042095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:99536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.459 [2024-05-15 20:26:27.042100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:51.459 [2024-05-15 20:26:27.042111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:99544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.459 [2024-05-15 20:26:27.042116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:51.459 [2024-05-15 20:26:27.042126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.459 [2024-05-15 20:26:27.042132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:51.459 [2024-05-15 20:26:27.042142] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:99560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.459 [2024-05-15 20:26:27.042148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:51.459 [2024-05-15 20:26:27.042158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.459 [2024-05-15 20:26:27.042164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:51.459 [2024-05-15 20:26:27.042174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.459 [2024-05-15 20:26:27.042179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:51.459 [2024-05-15 20:26:27.042190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:99584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.459 [2024-05-15 20:26:27.042195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:51.459 [2024-05-15 20:26:27.042205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:99592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.459 [2024-05-15 20:26:27.042210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:51.459 [2024-05-15 20:26:27.042221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.459 [2024-05-15 20:26:27.042227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:51.459 [2024-05-15 20:26:27.042237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.459 [2024-05-15 20:26:27.042242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:51.459 [2024-05-15 20:26:27.042252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.459 [2024-05-15 20:26:27.042258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:51.459 [2024-05-15 20:26:27.042270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.459 [2024-05-15 20:26:27.042275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:51.459 [2024-05-15 20:26:27.042286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:99632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.459 [2024-05-15 20:26:27.042291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 
00:34:51.459 [2024-05-15 20:26:27.042301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.459 [2024-05-15 20:26:27.042307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:51.459 [2024-05-15 20:26:27.042320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:99648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.459 [2024-05-15 20:26:27.042326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:51.459 [2024-05-15 20:26:27.042336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.459 [2024-05-15 20:26:27.042342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:51.459 [2024-05-15 20:26:27.042352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.459 [2024-05-15 20:26:27.042358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:51.459 [2024-05-15 20:26:27.042368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:99672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.459 [2024-05-15 20:26:27.042373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:51.459 [2024-05-15 20:26:27.042384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.459 [2024-05-15 20:26:27.042389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:51.459 [2024-05-15 20:26:27.042400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.459 [2024-05-15 20:26:27.042406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:51.459 [2024-05-15 20:26:27.042416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.459 [2024-05-15 20:26:27.042421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:51.459 [2024-05-15 20:26:27.042433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.459 [2024-05-15 20:26:27.042439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:51.459 [2024-05-15 20:26:27.042449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.459 [2024-05-15 20:26:27.042455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:51.459 [2024-05-15 20:26:27.042467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.459 [2024-05-15 20:26:27.042472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:51.459 [2024-05-15 20:26:27.042483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.459 [2024-05-15 20:26:27.042489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:51.459 [2024-05-15 20:26:27.042499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.459 [2024-05-15 20:26:27.042504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:51.459 [2024-05-15 20:26:27.042515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.459 [2024-05-15 20:26:27.042520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:51.459 [2024-05-15 20:26:27.042531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.459 [2024-05-15 20:26:27.042537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:51.459 [2024-05-15 20:26:27.042547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.460 [2024-05-15 20:26:27.042553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.042563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.460 [2024-05-15 20:26:27.042568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.042579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.460 [2024-05-15 20:26:27.042584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.042595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:99784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.460 [2024-05-15 20:26:27.042600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.042611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.460 [2024-05-15 20:26:27.042616] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.042626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.460 [2024-05-15 20:26:27.042632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.042642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.460 [2024-05-15 20:26:27.042648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.042658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.460 [2024-05-15 20:26:27.042665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.042675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.460 [2024-05-15 20:26:27.042681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.042691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.460 [2024-05-15 20:26:27.042697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.042707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:99840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.460 [2024-05-15 20:26:27.042713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.042723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.460 [2024-05-15 20:26:27.042730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.042740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.460 [2024-05-15 20:26:27.042745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.042755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:99864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.460 [2024-05-15 20:26:27.042761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.042771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:99872 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:51.460 [2024-05-15 20:26:27.042776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.042787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.460 [2024-05-15 20:26:27.042792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.042804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.460 [2024-05-15 20:26:27.042809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.042820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.460 [2024-05-15 20:26:27.042825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.042836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:99904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.460 [2024-05-15 20:26:27.042842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.043169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.460 [2024-05-15 20:26:27.043179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.043192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.460 [2024-05-15 20:26:27.043198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.043208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.460 [2024-05-15 20:26:27.043213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.043224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.460 [2024-05-15 20:26:27.043229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.043240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.460 [2024-05-15 20:26:27.043246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.043256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:116 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.460 [2024-05-15 20:26:27.043261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.043272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.460 [2024-05-15 20:26:27.043277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.043288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.460 [2024-05-15 20:26:27.043293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.043303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.460 [2024-05-15 20:26:27.043309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.043323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.460 [2024-05-15 20:26:27.043328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.043339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.460 [2024-05-15 20:26:27.043344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.043355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.460 [2024-05-15 20:26:27.043360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.043370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.460 [2024-05-15 20:26:27.043377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.043388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.460 [2024-05-15 20:26:27.043393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.043404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.460 [2024-05-15 20:26:27.043409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 
20:26:27.043420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.460 [2024-05-15 20:26:27.043425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.043436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.460 [2024-05-15 20:26:27.043441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.043451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:100032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.460 [2024-05-15 20:26:27.043456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.043467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.460 [2024-05-15 20:26:27.043472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.043482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:100048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.460 [2024-05-15 20:26:27.043488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:51.460 [2024-05-15 20:26:27.043498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:100056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.461 [2024-05-15 20:26:27.043503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.043514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:100064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.461 [2024-05-15 20:26:27.043519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.043529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.461 [2024-05-15 20:26:27.043534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.043545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:100080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.461 [2024-05-15 20:26:27.043550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.043560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:100088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.461 [2024-05-15 20:26:27.043566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.043578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:100096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.461 [2024-05-15 20:26:27.043584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.043594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:100104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.461 [2024-05-15 20:26:27.043600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.043610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:100112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.461 [2024-05-15 20:26:27.043616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.043626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:100120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.461 [2024-05-15 20:26:27.043632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.043642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:100128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.461 [2024-05-15 20:26:27.043647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.043658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:100136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.461 [2024-05-15 20:26:27.043663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.043673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.461 [2024-05-15 20:26:27.043679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.043689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.461 [2024-05-15 20:26:27.043694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.043705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.461 [2024-05-15 20:26:27.043711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.043936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.461 [2024-05-15 20:26:27.043944] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.043955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.461 [2024-05-15 20:26:27.043960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.044045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.461 [2024-05-15 20:26:27.044052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.044065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.461 [2024-05-15 20:26:27.044070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.044081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.461 [2024-05-15 20:26:27.044086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.044096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.461 [2024-05-15 20:26:27.044102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.044112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.461 [2024-05-15 20:26:27.044117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.044128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.461 [2024-05-15 20:26:27.044133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.044143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.461 [2024-05-15 20:26:27.044149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.044159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.461 [2024-05-15 20:26:27.044164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.044174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:51.461 [2024-05-15 20:26:27.044181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.044191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.461 [2024-05-15 20:26:27.044197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.044208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.461 [2024-05-15 20:26:27.044213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.044223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.461 [2024-05-15 20:26:27.044229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.044240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:99320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.461 [2024-05-15 20:26:27.044246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.044256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:99328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.461 [2024-05-15 20:26:27.044263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.044274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:99336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.461 [2024-05-15 20:26:27.044280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.044290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.461 [2024-05-15 20:26:27.044296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.044306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:99352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.461 [2024-05-15 20:26:27.044311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.044462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:99360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.461 [2024-05-15 20:26:27.044470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.044481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:11 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.461 [2024-05-15 20:26:27.044487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.044498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.461 [2024-05-15 20:26:27.044503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.044513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.461 [2024-05-15 20:26:27.044519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.044529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.461 [2024-05-15 20:26:27.044535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.044545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.461 [2024-05-15 20:26:27.044550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.045681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.461 [2024-05-15 20:26:27.045688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:51.461 [2024-05-15 20:26:27.045699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:99376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.461 [2024-05-15 20:26:27.045704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.045714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:99384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 20:26:27.045721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.045731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 20:26:27.045736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.045746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:99400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 20:26:27.045751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 
20:26:27.045762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:99408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 20:26:27.045766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.045777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 20:26:27.045781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.045792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:99424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 20:26:27.045796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.045807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 20:26:27.045811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.045822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:99440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 20:26:27.045826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.045837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:51.462 [2024-05-15 20:26:27.045842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.485100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:99448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 20:26:27.485129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.485143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 20:26:27.485149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.485160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:99464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 20:26:27.485166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.485177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:99472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 20:26:27.485186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 
cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.485197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 20:26:27.485202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.485213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:99488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 20:26:27.485219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.485403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 20:26:27.485414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.485433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 20:26:27.485440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.485452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:99512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 20:26:27.485457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.485469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:99520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 20:26:27.485475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.485486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:99528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 20:26:27.485492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.485504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:99536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 20:26:27.485510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.485522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:99544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 20:26:27.485527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.485539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:99552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 20:26:27.485545] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.485556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 20:26:27.485562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.485574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 20:26:27.485580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.485594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 20:26:27.485600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.485611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:99584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 20:26:27.485617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.485628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:99592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 20:26:27.485634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.485646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 20:26:27.485651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.485663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 20:26:27.485668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.485679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 20:26:27.485685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.485696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 20:26:27.485702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.485715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 
20:26:27.485720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.485731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 20:26:27.485737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.485748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:99648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 20:26:27.485754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.485765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 20:26:27.485770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.485781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 20:26:27.485787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.485801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 20:26:27.485807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.485818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.462 [2024-05-15 20:26:27.485824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:51.462 [2024-05-15 20:26:27.485835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.463 [2024-05-15 20:26:27.485840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:51.463 [2024-05-15 20:26:27.485851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.463 [2024-05-15 20:26:27.485857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:51.463 [2024-05-15 20:26:27.485869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.463 [2024-05-15 20:26:27.485874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:51.463 [2024-05-15 20:26:27.485885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99712 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.463 [2024-05-15 20:26:27.485890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:51.463 [2024-05-15 20:26:27.485901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.463 [2024-05-15 20:26:27.485907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:51.463 [2024-05-15 20:26:27.485918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.463 [2024-05-15 20:26:27.485923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:51.463 [2024-05-15 20:26:27.485935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.463 [2024-05-15 20:26:27.485939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:51.463 [2024-05-15 20:26:27.485951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.463 [2024-05-15 20:26:27.485957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:51.463 [2024-05-15 20:26:27.485968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.463 [2024-05-15 20:26:27.485974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:51.463 [2024-05-15 20:26:27.485985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.463 [2024-05-15 20:26:27.485991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:51.463 [2024-05-15 20:26:27.486002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.463 [2024-05-15 20:26:27.486009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:51.463 [2024-05-15 20:26:27.486021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.463 [2024-05-15 20:26:27.486027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:51.463 [2024-05-15 20:26:27.486038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.463 [2024-05-15 20:26:27.486044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:51.463 [2024-05-15 20:26:27.486056] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.463 [2024-05-15 20:26:27.486061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:51.463 [2024-05-15 20:26:27.486073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.463 [2024-05-15 20:26:27.486078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:51.463 [2024-05-15 20:26:27.486089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.463 [2024-05-15 20:26:27.486094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:51.463 [2024-05-15 20:26:27.486106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.463 [2024-05-15 20:26:27.486111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:51.463 [2024-05-15 20:26:27.486122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.463 [2024-05-15 20:26:27.486128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:51.463 [2024-05-15 20:26:27.486140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:99832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.463 [2024-05-15 20:26:27.486145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:51.463 [2024-05-15 20:26:27.486157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.463 [2024-05-15 20:26:27.486162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:51.463 [2024-05-15 20:26:27.486173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.463 [2024-05-15 20:26:27.486179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:51.463 [2024-05-15 20:26:27.486191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.463 [2024-05-15 20:26:27.486196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:51.463 [2024-05-15 20:26:27.486208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:99864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.463 [2024-05-15 20:26:27.486214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004d p:0 m:0 dnr:0 
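(Editor's note on the trace above and below: these NOTICE pairs come from SPDK's qpair debug helpers. nvme_io_qpair_print_command logs each queued I/O with its opcode, sqid, cid and lba, and spdk_nvme_print_completion logs how that command finished, here with ASYMMETRIC ACCESS INACCESSIBLE, whose "(03/02)" field is the printed status code type 0x3 / status code 0x2. The same LBAs reappearing later in the trace with new cids are those I/Os being reissued after the ANA state change. As a reading aid only, the sketch below is a hypothetical helper, not part of the SPDK test suite; it assumes nothing beyond the nvme_qpair.c NOTICE format visible in this log, and the file name console.log is an assumption.)

#!/usr/bin/env python3
# Hypothetical helper (assumption, not from the SPDK repo): tally the NVMe
# command opcodes and completion statuses printed in a saved autotest console log.
import re
import sys
from collections import Counter

# Matches: "nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 ..."
CMD_RE = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (\w+) sqid:(\d+) cid:(\d+)")
# Matches: "spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 ..."
CPL_RE = re.compile(r"spdk_nvme_print_completion: \*NOTICE\*: (.+?) \((\d+)/(\d+)\) qid:(\d+) cid:(\d+)")

def main(path: str) -> None:
    opcodes = Counter()   # e.g. READ / WRITE
    statuses = Counter()  # e.g. "ASYMMETRIC ACCESS INACCESSIBLE (03/02)"
    with open(path, errors="replace") as log:
        for line in log:
            # finditer(): a single wrapped console line can hold many entries;
            # entries split across physical lines are simply skipped.
            for m in CMD_RE.finditer(line):
                opcodes[m.group(1)] += 1
            for m in CPL_RE.finditer(line):
                statuses[f"{m.group(1)} ({m.group(2)}/{m.group(3)})"] += 1
    print("commands printed:", dict(opcodes))
    for status, count in statuses.most_common():
        print(f"{count:6d}  {status}")

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "console.log")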
00:34:51.463 [2024-05-15 20:26:27.486225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.463 [2024-05-15 20:26:27.486231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:51.463 [2024-05-15 20:26:27.486242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.463 [2024-05-15 20:26:27.486248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:51.463 [2024-05-15 20:26:27.486260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.463 [2024-05-15 20:26:27.486265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:51.463 [2024-05-15 20:26:27.486277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:99896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.463 [2024-05-15 20:26:27.486282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:51.463 [2024-05-15 20:26:27.486293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:99904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.463 [2024-05-15 20:26:27.486299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:51.463 [2024-05-15 20:26:27.486310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.463 [2024-05-15 20:26:27.486321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:51.463 [2024-05-15 20:26:27.486333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.463 [2024-05-15 20:26:27.486338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:51.463 [2024-05-15 20:26:27.486350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.463 [2024-05-15 20:26:27.486355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:51.463 [2024-05-15 20:26:27.486366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:99936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.463 [2024-05-15 20:26:27.486372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:51.463 [2024-05-15 20:26:27.486383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:51.463 [2024-05-15 20:26:27.486388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
00:34:51.463 nvme_qpair.c: *NOTICE*: (condensed) between 20:26:27 and 20:26:41 the remaining queued READ/WRITE commands on qid:1 (lba ranges ~99320-100336 and ~41656-42688) all completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) while the active path was reported inaccessible; the repeated per-command cid/lba/sqhd notices are summarized here
00:34:51.467 Received shutdown signal, test time was about 29.434440 seconds
00:34:51.467
00:34:51.467 Latency(us)
00:34:51.467 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:51.467 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:34:51.467 Verification LBA range: start 0x0 length 0x4000
00:34:51.467 Nvme0n1 : 29.43 9328.28 36.44 0.00 0.00 13701.89 384.00 3509234.35
00:34:51.467 ===================================================================================================================
00:34:51.467 Total : 9328.28 36.44 0.00 0.00 13701.89 384.00 3509234.35
00:34:51.467 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:51.467 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:34:51.467 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:34:51.467 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:34:51.467 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:34:51.467 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:34:51.467 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:34:51.467 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
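Taken together with the module unloads and the killprocess/remove_spdk_ns steps traced just below, the multipath_status teardown reduces to a short command sequence. The following is a minimal sketch of that sequence rather than the harness code itself; the subsystem NQN, file paths and target PID (262095) are the ones reported by this run, and the kill/wait loop stands in for the harness's killprocess helper:

  #!/usr/bin/env bash
  # Condensed teardown, mirroring the trace above and below (sketch, not the actual multipath_status.sh).
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  TGT_PID=262095                                   # nvmf_tgt PID reported by this run

  # Remove the NVMe-oF subsystem that backed the multipath namespaces.
  "$SPDK_DIR/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

  # Drop the exit traps and the scratch file used by the test.
  trap - SIGINT SIGTERM EXIT
  rm -f "$SPDK_DIR/test/nvmf/host/try.txt"

  # Unload the kernel initiator modules, then stop the target process.
  sync
  modprobe -v -r nvme-tcp || true
  modprobe -v -r nvme-fabrics || true
  kill "$TGT_PID" 2>/dev/null || true
  while kill -0 "$TGT_PID" 2>/dev/null; do sleep 0.1; done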
00:34:51.467 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:51.467 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:51.467 rmmod nvme_tcp 00:34:51.467 rmmod nvme_fabrics 00:34:51.727 rmmod nvme_keyring 00:34:51.727 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:51.727 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:34:51.727 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:34:51.727 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 262095 ']' 00:34:51.727 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 262095 00:34:51.727 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 262095 ']' 00:34:51.727 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 262095 00:34:51.727 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:34:51.727 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:51.727 20:26:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 262095 00:34:51.727 20:26:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:51.727 20:26:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:51.727 20:26:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 262095' 00:34:51.727 killing process with pid 262095 00:34:51.727 20:26:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 262095 00:34:51.727 [2024-05-15 20:26:44.015762] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:34:51.727 20:26:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 262095 00:34:51.727 20:26:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:51.727 20:26:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:51.727 20:26:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:51.727 20:26:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:51.727 20:26:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:51.727 20:26:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:51.727 20:26:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:51.727 20:26:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:54.271 20:26:46 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:54.271 00:34:54.271 real 0m44.277s 00:34:54.271 user 1m56.014s 00:34:54.271 sys 0m12.129s 00:34:54.271 20:26:46 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:54.271 20:26:46 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@10 -- # set +x 00:34:54.271 ************************************ 00:34:54.271 END TEST nvmf_host_multipath_status 00:34:54.271 ************************************ 00:34:54.272 20:26:46 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:54.272 20:26:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:54.272 20:26:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:54.272 20:26:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:54.272 ************************************ 00:34:54.272 START TEST nvmf_discovery_remove_ifc 00:34:54.272 ************************************ 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:54.272 * Looking for test storage... 00:34:54.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc 
-- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # 
'[' 0 -eq 1 ']' 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:34:54.272 20:26:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:35:02.412 20:26:54 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:02.412 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:02.412 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:02.412 20:26:54 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:02.412 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:02.413 Found net devices under 0000:31:00.0: cvl_0_0 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:02.413 Found net devices under 0000:31:00.1: cvl_0_1 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:02.413 20:26:54 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:02.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:02.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.473 ms 00:35:02.413 00:35:02.413 --- 10.0.0.2 ping statistics --- 00:35:02.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:02.413 rtt min/avg/max/mdev = 0.473/0.473/0.473/0.000 ms 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:02.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:02.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:35:02.413 00:35:02.413 --- 10.0.0.1 ping statistics --- 00:35:02.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:02.413 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=273182 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 273182 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 273182 ']' 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:02.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:02.413 [2024-05-15 20:26:54.532958] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
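
For reference, the nvmf_tcp_init sequence traced above reduces to a namespace-based loopback topology: one of the two detected ice ports (cvl_0_0) is pushed into a dedicated network namespace and acts as the NVMe/TCP target side, while the other (cvl_0_1) stays in the root namespace as the initiator side. A condensed sketch of those steps, with interface names and addresses taken from this run rather than the full nvmf/common.sh logic:

ip netns add cvl_0_0_ns_spdk                                        # namespace for the target port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP (inside the netns)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                  # reachability check, both ways
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt target application is then launched inside that namespace (the ip netns exec ... nvmf_tgt -m 0x2 invocation above), while the host-side application started below runs in the root namespace and exposes its RPC socket at /tmp/host.sock.
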
00:35:02.413 [2024-05-15 20:26:54.533008] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:02.413 EAL: No free 2048 kB hugepages reported on node 1 00:35:02.413 [2024-05-15 20:26:54.603586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:02.413 [2024-05-15 20:26:54.666664] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:02.413 [2024-05-15 20:26:54.666700] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:02.413 [2024-05-15 20:26:54.666707] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:02.413 [2024-05-15 20:26:54.666713] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:02.413 [2024-05-15 20:26:54.666719] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:02.413 [2024-05-15 20:26:54.666742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:02.413 [2024-05-15 20:26:54.807364] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:02.413 [2024-05-15 20:26:54.815328] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:35:02.413 [2024-05-15 20:26:54.815529] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:35:02.413 null0 00:35:02.413 [2024-05-15 20:26:54.847519] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=273371 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 273371 /tmp/host.sock 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 273371 ']' 00:35:02.413 20:26:54 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:35:02.413 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:02.413 20:26:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:02.673 [2024-05-15 20:26:54.915519] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:35:02.673 [2024-05-15 20:26:54.915564] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid273371 ] 00:35:02.673 EAL: No free 2048 kB hugepages reported on node 1 00:35:02.673 [2024-05-15 20:26:54.997901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:02.673 [2024-05-15 20:26:55.062939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:03.613 20:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:03.613 20:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:35:03.613 20:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:03.613 20:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:35:03.613 20:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.613 20:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:03.613 20:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.613 20:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:35:03.613 20:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.613 20:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:03.613 20:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.613 20:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:35:03.613 20:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.613 20:26:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:04.553 [2024-05-15 20:26:56.826166] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:04.553 [2024-05-15 20:26:56.826190] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:04.553 [2024-05-15 
20:26:56.826204] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:04.553 [2024-05-15 20:26:56.955695] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:35:04.553 [2024-05-15 20:26:57.016032] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:04.553 [2024-05-15 20:26:57.016079] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:04.553 [2024-05-15 20:26:57.016103] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:04.553 [2024-05-15 20:26:57.016117] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:04.553 [2024-05-15 20:26:57.016136] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:04.553 20:26:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:04.553 20:26:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:35:04.553 20:26:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:04.553 20:26:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:04.553 20:26:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:04.553 20:26:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:04.553 20:26:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:04.553 20:26:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:04.553 20:26:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:04.553 [2024-05-15 20:26:57.025623] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1215650 was disconnected and freed. delete nvme_qpair. 
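
The repeated get_bdev_list / sleep 1 blocks from here on are the host-side poll loop waiting for the namespace bdev to appear (or disappear). A hedged reconstruction of that helper pattern, paraphrased from the traced commands rather than copied from discovery_remove_ifc.sh, assuming rpc_cmd forwards to the SPDK RPC client against /tmp/host.sock as shown in the trace:

get_bdev_list() {
    # list the bdev names currently known to the host app, as one sorted line
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # poll once a second until the bdev list matches the expected value
    # (an empty string means "wait until the bdev is gone")
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}

wait_for_bdev nvme0n1   # discovery attached nvme0, so bdev nvme0n1 should show up
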
00:35:04.553 20:26:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:04.813 20:26:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:35:04.813 20:26:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:35:04.813 20:26:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:35:04.813 20:26:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:35:04.813 20:26:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:04.813 20:26:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:04.813 20:26:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:04.813 20:26:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:04.813 20:26:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:04.813 20:26:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:04.813 20:26:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:04.813 20:26:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:04.813 20:26:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:04.813 20:26:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:06.196 20:26:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:06.196 20:26:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:06.196 20:26:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:06.196 20:26:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.196 20:26:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:06.196 20:26:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:06.196 20:26:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:06.196 20:26:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.196 20:26:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:06.196 20:26:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:07.139 20:26:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:07.139 20:26:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:07.139 20:26:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:07.139 20:26:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:07.139 20:26:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.139 20:26:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- 
# sort 00:35:07.139 20:26:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:07.139 20:26:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.139 20:26:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:07.139 20:26:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:08.075 20:27:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:08.075 20:27:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:08.075 20:27:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:08.075 20:27:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:08.075 20:27:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.075 20:27:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:08.075 20:27:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:08.075 20:27:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.076 20:27:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:08.076 20:27:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:09.014 20:27:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:09.014 20:27:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:09.014 20:27:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:09.015 20:27:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:09.015 20:27:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.015 20:27:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:09.015 20:27:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:09.015 20:27:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.015 20:27:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:09.015 20:27:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:10.394 [2024-05-15 20:27:02.456631] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:35:10.394 [2024-05-15 20:27:02.456676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:10.394 [2024-05-15 20:27:02.456689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.394 [2024-05-15 20:27:02.456703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:10.394 [2024-05-15 20:27:02.456711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:35:10.394 [2024-05-15 20:27:02.456718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:10.395 [2024-05-15 20:27:02.456726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.395 [2024-05-15 20:27:02.456733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:10.395 [2024-05-15 20:27:02.456741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.395 [2024-05-15 20:27:02.456749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:10.395 [2024-05-15 20:27:02.456756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.395 [2024-05-15 20:27:02.456763] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11dc9c0 is same with the state(5) to be set 00:35:10.395 [2024-05-15 20:27:02.466653] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11dc9c0 (9): Bad file descriptor 00:35:10.395 [2024-05-15 20:27:02.476694] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:10.395 20:27:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:10.395 20:27:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:10.395 20:27:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:10.395 20:27:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.395 20:27:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:10.395 20:27:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:10.395 20:27:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:11.334 [2024-05-15 20:27:03.538409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:35:12.274 [2024-05-15 20:27:04.562398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:35:12.274 [2024-05-15 20:27:04.562486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc9c0 with addr=10.0.0.2, port=4420 00:35:12.274 [2024-05-15 20:27:04.562518] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11dc9c0 is same with the state(5) to be set 00:35:12.274 [2024-05-15 20:27:04.563565] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11dc9c0 (9): Bad file descriptor 00:35:12.274 [2024-05-15 20:27:04.563629] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
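
The connect() failures with errno 110 (ETIMEDOUT) and the failed controller reset above are the intended result of the fault injected a few steps earlier, where the test strips the target-side interface out from under the established connection:

ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0   # drop the target IP
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down              # and down the link

With the 10.0.0.2:4420 listener unreachable, the host's bdev_nvme layer keeps retrying the connection according to its reconnect policy until the controller-loss timeout expires and the nvme0n1 bdev is deleted, which is what the poll loop is waiting to observe.
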
00:35:12.274 [2024-05-15 20:27:04.563679] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:35:12.274 [2024-05-15 20:27:04.563735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:12.274 [2024-05-15 20:27:04.563763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:12.274 [2024-05-15 20:27:04.563791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:12.274 [2024-05-15 20:27:04.563812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:12.274 [2024-05-15 20:27:04.563848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:12.274 [2024-05-15 20:27:04.563869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:12.274 [2024-05-15 20:27:04.563893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:12.274 [2024-05-15 20:27:04.563913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:12.274 [2024-05-15 20:27:04.563937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:12.274 [2024-05-15 20:27:04.563958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:12.274 [2024-05-15 20:27:04.563980] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
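
How quickly the controller gives up here is governed by the timeouts passed when the discovery controller was attached (the bdev_nvme_start_discovery call earlier in this trace). Restated with comments for reference, values taken from this run:

# reconnect-delay-sec 1:      retry the connection once per second
# fast-io-fail-timeout-sec 1: fail outstanding I/O after one second offline
# ctrlr-loss-timeout-sec 2:   give up and delete the controller after two seconds
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach

These values keep the recovery window short, so the nqn.2016-06.io.spdk:cnode0 controller and its discovery entry are torn down within a few seconds of the interface going away, as logged above.
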
00:35:12.274 [2024-05-15 20:27:04.564008] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11dbe50 (9): Bad file descriptor 00:35:12.274 [2024-05-15 20:27:04.564659] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:35:12.274 [2024-05-15 20:27:04.564693] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:35:12.274 20:27:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.274 20:27:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:12.274 20:27:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:13.216 20:27:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:13.216 20:27:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:13.216 20:27:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:13.216 20:27:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.216 20:27:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:13.216 20:27:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:13.216 20:27:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:13.216 20:27:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.216 20:27:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:35:13.216 20:27:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:13.216 20:27:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:13.475 20:27:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:35:13.475 20:27:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:13.475 20:27:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:13.475 20:27:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.475 20:27:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:13.475 20:27:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:13.475 20:27:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:13.476 20:27:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:13.476 20:27:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.476 20:27:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:13.476 20:27:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:14.416 [2024-05-15 20:27:06.622502] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:14.416 [2024-05-15 20:27:06.622523] 
bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:14.416 [2024-05-15 20:27:06.622538] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:14.416 [2024-05-15 20:27:06.711811] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:35:14.416 20:27:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:14.416 20:27:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:14.416 20:27:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:14.416 20:27:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:14.416 20:27:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.417 20:27:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:14.417 20:27:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:14.417 20:27:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.417 20:27:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:14.417 20:27:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:14.417 [2024-05-15 20:27:06.895192] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:14.417 [2024-05-15 20:27:06.895232] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:14.417 [2024-05-15 20:27:06.895251] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:14.417 [2024-05-15 20:27:06.895266] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:35:14.417 [2024-05-15 20:27:06.895274] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:14.676 [2024-05-15 20:27:06.941102] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x121fd90 was disconnected and freed. delete nvme_qpair. 
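
The recovery half of the test mirrors the fault injection: the target-side interface is restored and the poll loop waits for discovery to re-attach the subsystem as a fresh controller (nvme1, hence bdev nvme1n1). Condensed from the traced commands:

ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # give the target its IP back
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up                # bring the link back up
wait_for_bdev nvme1n1   # poll (as sketched earlier) until the re-attached bdev appears
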
00:35:15.618 20:27:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:15.618 20:27:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:15.618 20:27:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:15.618 20:27:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.618 20:27:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:15.618 20:27:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:15.618 20:27:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:15.618 20:27:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.618 20:27:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:35:15.618 20:27:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:35:15.618 20:27:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 273371 00:35:15.618 20:27:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 273371 ']' 00:35:15.618 20:27:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 273371 00:35:15.618 20:27:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:35:15.618 20:27:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:15.618 20:27:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 273371 00:35:15.618 20:27:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:15.618 20:27:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:15.618 20:27:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 273371' 00:35:15.618 killing process with pid 273371 00:35:15.618 20:27:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 273371 00:35:15.618 20:27:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 273371 00:35:15.618 20:27:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:35:15.618 20:27:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:15.618 20:27:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:35:15.618 20:27:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:15.618 20:27:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:35:15.618 20:27:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:15.618 20:27:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:15.879 rmmod nvme_tcp 00:35:15.879 rmmod nvme_fabrics 00:35:15.879 rmmod nvme_keyring 00:35:15.879 20:27:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:15.879 20:27:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:35:15.879 20:27:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:35:15.879 
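
Teardown for this test case, condensed from the surrounding trace: the host-side app is killed, the nvme-tcp/nvme-fabrics kernel modules loaded for the test are removed (the rmmod lines above are modprobe -v output), then the target app, namespace, and leftover addresses are cleaned up by nvmftestfini. A rough sketch only; the namespace cleanup is done by the _remove_spdk_ns helper, whose body is not shown in this log, so the ip netns delete line below is an assumption:

kill 273371                        # host-side app ($hostpid)
modprobe -v -r nvme-tcp            # removes nvme_tcp (and, per the log, nvme_fabrics/nvme_keyring)
modprobe -v -r nvme-fabrics
kill 273182                        # nvmf_tgt target ($nvmfpid) running inside the namespace
ip netns delete cvl_0_0_ns_spdk    # assumption: equivalent of _remove_spdk_ns for this run
ip -4 addr flush cvl_0_1           # flush the initiator-side address
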
20:27:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 273182 ']' 00:35:15.879 20:27:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 273182 00:35:15.879 20:27:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 273182 ']' 00:35:15.879 20:27:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 273182 00:35:15.879 20:27:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:35:15.879 20:27:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:15.879 20:27:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 273182 00:35:15.879 20:27:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:35:15.879 20:27:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:35:15.879 20:27:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 273182' 00:35:15.879 killing process with pid 273182 00:35:15.879 20:27:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 273182 00:35:15.879 [2024-05-15 20:27:08.241089] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:35:15.879 20:27:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 273182 00:35:15.879 20:27:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:15.879 20:27:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:15.879 20:27:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:15.879 20:27:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:15.879 20:27:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:15.879 20:27:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:15.879 20:27:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:15.879 20:27:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:18.427 20:27:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:18.427 00:35:18.427 real 0m24.125s 00:35:18.427 user 0m27.863s 00:35:18.427 sys 0m7.046s 00:35:18.427 20:27:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:18.427 20:27:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:18.427 ************************************ 00:35:18.427 END TEST nvmf_discovery_remove_ifc 00:35:18.427 ************************************ 00:35:18.427 20:27:10 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:18.427 20:27:10 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:35:18.427 20:27:10 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:18.427 20:27:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:18.427 
************************************ 00:35:18.427 START TEST nvmf_identify_kernel_target 00:35:18.427 ************************************ 00:35:18.427 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:18.427 * Looking for test storage... 00:35:18.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:18.427 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:18.427 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:35:18.427 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:18.427 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:18.427 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:18.427 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:18.427 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:18.427 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:18.427 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:18.427 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:18.427 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:18.427 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:18.427 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:18.427 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:18.427 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:18.427 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:18.427 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:18.427 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:18.427 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:18.427 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:18.427 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:18.427 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:18.428 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.428 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.428 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.428 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:35:18.428 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.428 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:35:18.428 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:18.428 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:18.428 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:18.428 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:18.428 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:18.428 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:18.428 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:18.428 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:18.428 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:35:18.428 20:27:10 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:18.428 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:18.428 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:18.428 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:18.428 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:18.428 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:18.428 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:18.428 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:18.428 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:18.428 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:18.428 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:35:18.428 20:27:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:26.572 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:26.572 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:26.572 Found net devices under 0000:31:00.0: cvl_0_0 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:26.572 Found net devices under 0000:31:00.1: cvl_0_1 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:26.572 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:26.573 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:26.573 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:26.573 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:26.573 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:26.573 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:26.573 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:26.573 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:26.573 20:27:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:26.573 20:27:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:26.573 20:27:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:26.573 20:27:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:26.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:26.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.535 ms 00:35:26.573 00:35:26.573 --- 10.0.0.2 ping statistics --- 00:35:26.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:26.573 rtt min/avg/max/mdev = 0.535/0.535/0.535/0.000 ms 00:35:26.573 20:27:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:26.834 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:26.834 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.464 ms 00:35:26.834 00:35:26.834 --- 10.0.0.1 ping statistics --- 00:35:26.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:26.834 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:35:26.834 20:27:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:26.834 20:27:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:35:26.834 20:27:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:26.834 20:27:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:26.834 20:27:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:26.834 20:27:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:26.834 20:27:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:26.834 20:27:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:26.834 20:27:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:26.834 20:27:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:35:26.834 20:27:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:35:26.834 20:27:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:35:26.834 20:27:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:26.834 20:27:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:26.834 20:27:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:26.834 20:27:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:26.834 20:27:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:26.834 20:27:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:26.834 20:27:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:26.834 20:27:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:26.834 20:27:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:26.834 20:27:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:35:26.834 20:27:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:26.834 20:27:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:26.834 20:27:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:35:26.834 20:27:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:26.834 20:27:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:26.834 20:27:19 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:26.834 20:27:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:35:26.834 20:27:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:35:26.834 20:27:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:35:26.834 20:27:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:26.834 20:27:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:31.185 Waiting for block devices as requested 00:35:31.185 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:31.185 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:31.185 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:31.185 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:31.185 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:31.185 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:31.185 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:31.185 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:31.185 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:35:31.446 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:31.446 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:31.706 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:31.706 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:31.706 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:31.966 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:31.966 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:31.966 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:32.227 20:27:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:32.227 20:27:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:32.227 20:27:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:35:32.227 20:27:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:35:32.227 20:27:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:32.227 20:27:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:35:32.227 20:27:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:35:32.227 20:27:24 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:35:32.227 20:27:24 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:32.488 No valid GPT data, bailing 00:35:32.488 20:27:24 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:32.488 20:27:24 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:35:32.488 20:27:24 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:35:32.488 20:27:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:35:32.488 20:27:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:35:32.488 20:27:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:32.488 20:27:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:32.488 20:27:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:32.488 20:27:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:32.488 20:27:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:35:32.488 20:27:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:35:32.488 20:27:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:35:32.488 20:27:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:35:32.488 20:27:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:35:32.488 20:27:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:35:32.488 20:27:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:35:32.488 20:27:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:32.488 20:27:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:35:32.488 00:35:32.488 Discovery Log Number of Records 2, Generation counter 2 00:35:32.488 =====Discovery Log Entry 0====== 00:35:32.488 trtype: tcp 00:35:32.488 adrfam: ipv4 00:35:32.488 subtype: current discovery subsystem 00:35:32.488 treq: not specified, sq flow control disable supported 00:35:32.488 portid: 1 00:35:32.488 trsvcid: 4420 00:35:32.488 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:32.488 traddr: 10.0.0.1 00:35:32.488 eflags: none 00:35:32.488 sectype: none 00:35:32.488 =====Discovery Log Entry 1====== 00:35:32.488 trtype: tcp 00:35:32.488 adrfam: ipv4 00:35:32.488 subtype: nvme subsystem 00:35:32.488 treq: not specified, sq flow control disable supported 00:35:32.488 portid: 1 00:35:32.488 trsvcid: 4420 00:35:32.488 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:32.488 traddr: 10.0.0.1 00:35:32.488 eflags: none 00:35:32.488 sectype: none 00:35:32.489 20:27:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:35:32.489 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:35:32.489 EAL: No free 2048 kB hugepages reported on node 1 00:35:32.489 ===================================================== 00:35:32.489 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:35:32.489 ===================================================== 00:35:32.489 Controller Capabilities/Features 00:35:32.489 ================================ 00:35:32.489 Vendor ID: 0000 00:35:32.489 Subsystem Vendor ID: 0000 00:35:32.489 Serial Number: 857959a3010541fb2a21 00:35:32.489 Model Number: Linux 00:35:32.489 Firmware Version: 6.7.0-68 00:35:32.489 Recommended Arb Burst: 0 00:35:32.489 IEEE OUI Identifier: 00 00 00 00:35:32.489 Multi-path I/O 00:35:32.489 May have multiple subsystem ports: No 00:35:32.489 May have multiple 
controllers: No 00:35:32.489 Associated with SR-IOV VF: No 00:35:32.489 Max Data Transfer Size: Unlimited 00:35:32.489 Max Number of Namespaces: 0 00:35:32.489 Max Number of I/O Queues: 1024 00:35:32.489 NVMe Specification Version (VS): 1.3 00:35:32.489 NVMe Specification Version (Identify): 1.3 00:35:32.489 Maximum Queue Entries: 1024 00:35:32.489 Contiguous Queues Required: No 00:35:32.489 Arbitration Mechanisms Supported 00:35:32.489 Weighted Round Robin: Not Supported 00:35:32.489 Vendor Specific: Not Supported 00:35:32.489 Reset Timeout: 7500 ms 00:35:32.489 Doorbell Stride: 4 bytes 00:35:32.489 NVM Subsystem Reset: Not Supported 00:35:32.489 Command Sets Supported 00:35:32.489 NVM Command Set: Supported 00:35:32.489 Boot Partition: Not Supported 00:35:32.489 Memory Page Size Minimum: 4096 bytes 00:35:32.489 Memory Page Size Maximum: 4096 bytes 00:35:32.489 Persistent Memory Region: Not Supported 00:35:32.489 Optional Asynchronous Events Supported 00:35:32.489 Namespace Attribute Notices: Not Supported 00:35:32.489 Firmware Activation Notices: Not Supported 00:35:32.489 ANA Change Notices: Not Supported 00:35:32.489 PLE Aggregate Log Change Notices: Not Supported 00:35:32.489 LBA Status Info Alert Notices: Not Supported 00:35:32.489 EGE Aggregate Log Change Notices: Not Supported 00:35:32.489 Normal NVM Subsystem Shutdown event: Not Supported 00:35:32.489 Zone Descriptor Change Notices: Not Supported 00:35:32.489 Discovery Log Change Notices: Supported 00:35:32.489 Controller Attributes 00:35:32.489 128-bit Host Identifier: Not Supported 00:35:32.489 Non-Operational Permissive Mode: Not Supported 00:35:32.489 NVM Sets: Not Supported 00:35:32.489 Read Recovery Levels: Not Supported 00:35:32.489 Endurance Groups: Not Supported 00:35:32.489 Predictable Latency Mode: Not Supported 00:35:32.489 Traffic Based Keep ALive: Not Supported 00:35:32.489 Namespace Granularity: Not Supported 00:35:32.489 SQ Associations: Not Supported 00:35:32.489 UUID List: Not Supported 00:35:32.489 Multi-Domain Subsystem: Not Supported 00:35:32.489 Fixed Capacity Management: Not Supported 00:35:32.489 Variable Capacity Management: Not Supported 00:35:32.489 Delete Endurance Group: Not Supported 00:35:32.489 Delete NVM Set: Not Supported 00:35:32.489 Extended LBA Formats Supported: Not Supported 00:35:32.489 Flexible Data Placement Supported: Not Supported 00:35:32.489 00:35:32.489 Controller Memory Buffer Support 00:35:32.489 ================================ 00:35:32.489 Supported: No 00:35:32.489 00:35:32.489 Persistent Memory Region Support 00:35:32.489 ================================ 00:35:32.489 Supported: No 00:35:32.489 00:35:32.489 Admin Command Set Attributes 00:35:32.489 ============================ 00:35:32.489 Security Send/Receive: Not Supported 00:35:32.489 Format NVM: Not Supported 00:35:32.489 Firmware Activate/Download: Not Supported 00:35:32.489 Namespace Management: Not Supported 00:35:32.489 Device Self-Test: Not Supported 00:35:32.489 Directives: Not Supported 00:35:32.489 NVMe-MI: Not Supported 00:35:32.489 Virtualization Management: Not Supported 00:35:32.489 Doorbell Buffer Config: Not Supported 00:35:32.489 Get LBA Status Capability: Not Supported 00:35:32.489 Command & Feature Lockdown Capability: Not Supported 00:35:32.489 Abort Command Limit: 1 00:35:32.489 Async Event Request Limit: 1 00:35:32.489 Number of Firmware Slots: N/A 00:35:32.489 Firmware Slot 1 Read-Only: N/A 00:35:32.489 Firmware Activation Without Reset: N/A 00:35:32.489 Multiple Update Detection Support: N/A 
00:35:32.489 Firmware Update Granularity: No Information Provided 00:35:32.489 Per-Namespace SMART Log: No 00:35:32.489 Asymmetric Namespace Access Log Page: Not Supported 00:35:32.489 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:35:32.489 Command Effects Log Page: Not Supported 00:35:32.489 Get Log Page Extended Data: Supported 00:35:32.489 Telemetry Log Pages: Not Supported 00:35:32.489 Persistent Event Log Pages: Not Supported 00:35:32.489 Supported Log Pages Log Page: May Support 00:35:32.489 Commands Supported & Effects Log Page: Not Supported 00:35:32.489 Feature Identifiers & Effects Log Page:May Support 00:35:32.489 NVMe-MI Commands & Effects Log Page: May Support 00:35:32.489 Data Area 4 for Telemetry Log: Not Supported 00:35:32.489 Error Log Page Entries Supported: 1 00:35:32.489 Keep Alive: Not Supported 00:35:32.489 00:35:32.489 NVM Command Set Attributes 00:35:32.489 ========================== 00:35:32.489 Submission Queue Entry Size 00:35:32.489 Max: 1 00:35:32.489 Min: 1 00:35:32.489 Completion Queue Entry Size 00:35:32.489 Max: 1 00:35:32.489 Min: 1 00:35:32.489 Number of Namespaces: 0 00:35:32.489 Compare Command: Not Supported 00:35:32.489 Write Uncorrectable Command: Not Supported 00:35:32.489 Dataset Management Command: Not Supported 00:35:32.489 Write Zeroes Command: Not Supported 00:35:32.489 Set Features Save Field: Not Supported 00:35:32.489 Reservations: Not Supported 00:35:32.489 Timestamp: Not Supported 00:35:32.489 Copy: Not Supported 00:35:32.489 Volatile Write Cache: Not Present 00:35:32.489 Atomic Write Unit (Normal): 1 00:35:32.489 Atomic Write Unit (PFail): 1 00:35:32.489 Atomic Compare & Write Unit: 1 00:35:32.489 Fused Compare & Write: Not Supported 00:35:32.489 Scatter-Gather List 00:35:32.489 SGL Command Set: Supported 00:35:32.489 SGL Keyed: Not Supported 00:35:32.489 SGL Bit Bucket Descriptor: Not Supported 00:35:32.489 SGL Metadata Pointer: Not Supported 00:35:32.489 Oversized SGL: Not Supported 00:35:32.489 SGL Metadata Address: Not Supported 00:35:32.489 SGL Offset: Supported 00:35:32.489 Transport SGL Data Block: Not Supported 00:35:32.489 Replay Protected Memory Block: Not Supported 00:35:32.489 00:35:32.489 Firmware Slot Information 00:35:32.489 ========================= 00:35:32.489 Active slot: 0 00:35:32.489 00:35:32.489 00:35:32.489 Error Log 00:35:32.489 ========= 00:35:32.489 00:35:32.489 Active Namespaces 00:35:32.489 ================= 00:35:32.489 Discovery Log Page 00:35:32.489 ================== 00:35:32.489 Generation Counter: 2 00:35:32.489 Number of Records: 2 00:35:32.489 Record Format: 0 00:35:32.489 00:35:32.489 Discovery Log Entry 0 00:35:32.489 ---------------------- 00:35:32.489 Transport Type: 3 (TCP) 00:35:32.489 Address Family: 1 (IPv4) 00:35:32.489 Subsystem Type: 3 (Current Discovery Subsystem) 00:35:32.489 Entry Flags: 00:35:32.489 Duplicate Returned Information: 0 00:35:32.489 Explicit Persistent Connection Support for Discovery: 0 00:35:32.489 Transport Requirements: 00:35:32.489 Secure Channel: Not Specified 00:35:32.489 Port ID: 1 (0x0001) 00:35:32.489 Controller ID: 65535 (0xffff) 00:35:32.489 Admin Max SQ Size: 32 00:35:32.489 Transport Service Identifier: 4420 00:35:32.489 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:35:32.489 Transport Address: 10.0.0.1 00:35:32.489 Discovery Log Entry 1 00:35:32.489 ---------------------- 00:35:32.489 Transport Type: 3 (TCP) 00:35:32.489 Address Family: 1 (IPv4) 00:35:32.489 Subsystem Type: 2 (NVM Subsystem) 00:35:32.489 Entry Flags: 
00:35:32.489 Duplicate Returned Information: 0 00:35:32.489 Explicit Persistent Connection Support for Discovery: 0 00:35:32.489 Transport Requirements: 00:35:32.489 Secure Channel: Not Specified 00:35:32.489 Port ID: 1 (0x0001) 00:35:32.489 Controller ID: 65535 (0xffff) 00:35:32.489 Admin Max SQ Size: 32 00:35:32.489 Transport Service Identifier: 4420 00:35:32.489 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:35:32.489 Transport Address: 10.0.0.1 00:35:32.489 20:27:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:32.752 EAL: No free 2048 kB hugepages reported on node 1 00:35:32.752 get_feature(0x01) failed 00:35:32.752 get_feature(0x02) failed 00:35:32.752 get_feature(0x04) failed 00:35:32.752 ===================================================== 00:35:32.752 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:32.752 ===================================================== 00:35:32.752 Controller Capabilities/Features 00:35:32.752 ================================ 00:35:32.752 Vendor ID: 0000 00:35:32.752 Subsystem Vendor ID: 0000 00:35:32.752 Serial Number: c0dbd946739aa263a067 00:35:32.752 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:35:32.752 Firmware Version: 6.7.0-68 00:35:32.752 Recommended Arb Burst: 6 00:35:32.752 IEEE OUI Identifier: 00 00 00 00:35:32.752 Multi-path I/O 00:35:32.752 May have multiple subsystem ports: Yes 00:35:32.752 May have multiple controllers: Yes 00:35:32.752 Associated with SR-IOV VF: No 00:35:32.752 Max Data Transfer Size: Unlimited 00:35:32.752 Max Number of Namespaces: 1024 00:35:32.752 Max Number of I/O Queues: 128 00:35:32.752 NVMe Specification Version (VS): 1.3 00:35:32.752 NVMe Specification Version (Identify): 1.3 00:35:32.752 Maximum Queue Entries: 1024 00:35:32.752 Contiguous Queues Required: No 00:35:32.752 Arbitration Mechanisms Supported 00:35:32.752 Weighted Round Robin: Not Supported 00:35:32.752 Vendor Specific: Not Supported 00:35:32.752 Reset Timeout: 7500 ms 00:35:32.752 Doorbell Stride: 4 bytes 00:35:32.752 NVM Subsystem Reset: Not Supported 00:35:32.752 Command Sets Supported 00:35:32.752 NVM Command Set: Supported 00:35:32.752 Boot Partition: Not Supported 00:35:32.752 Memory Page Size Minimum: 4096 bytes 00:35:32.752 Memory Page Size Maximum: 4096 bytes 00:35:32.752 Persistent Memory Region: Not Supported 00:35:32.752 Optional Asynchronous Events Supported 00:35:32.752 Namespace Attribute Notices: Supported 00:35:32.752 Firmware Activation Notices: Not Supported 00:35:32.752 ANA Change Notices: Supported 00:35:32.752 PLE Aggregate Log Change Notices: Not Supported 00:35:32.752 LBA Status Info Alert Notices: Not Supported 00:35:32.752 EGE Aggregate Log Change Notices: Not Supported 00:35:32.752 Normal NVM Subsystem Shutdown event: Not Supported 00:35:32.752 Zone Descriptor Change Notices: Not Supported 00:35:32.752 Discovery Log Change Notices: Not Supported 00:35:32.752 Controller Attributes 00:35:32.752 128-bit Host Identifier: Supported 00:35:32.752 Non-Operational Permissive Mode: Not Supported 00:35:32.752 NVM Sets: Not Supported 00:35:32.752 Read Recovery Levels: Not Supported 00:35:32.752 Endurance Groups: Not Supported 00:35:32.752 Predictable Latency Mode: Not Supported 00:35:32.752 Traffic Based Keep ALive: Supported 00:35:32.752 Namespace Granularity: Not Supported 
00:35:32.752 SQ Associations: Not Supported 00:35:32.752 UUID List: Not Supported 00:35:32.752 Multi-Domain Subsystem: Not Supported 00:35:32.752 Fixed Capacity Management: Not Supported 00:35:32.752 Variable Capacity Management: Not Supported 00:35:32.752 Delete Endurance Group: Not Supported 00:35:32.752 Delete NVM Set: Not Supported 00:35:32.752 Extended LBA Formats Supported: Not Supported 00:35:32.752 Flexible Data Placement Supported: Not Supported 00:35:32.752 00:35:32.752 Controller Memory Buffer Support 00:35:32.752 ================================ 00:35:32.752 Supported: No 00:35:32.752 00:35:32.752 Persistent Memory Region Support 00:35:32.752 ================================ 00:35:32.752 Supported: No 00:35:32.752 00:35:32.752 Admin Command Set Attributes 00:35:32.752 ============================ 00:35:32.752 Security Send/Receive: Not Supported 00:35:32.752 Format NVM: Not Supported 00:35:32.752 Firmware Activate/Download: Not Supported 00:35:32.752 Namespace Management: Not Supported 00:35:32.752 Device Self-Test: Not Supported 00:35:32.752 Directives: Not Supported 00:35:32.752 NVMe-MI: Not Supported 00:35:32.752 Virtualization Management: Not Supported 00:35:32.752 Doorbell Buffer Config: Not Supported 00:35:32.752 Get LBA Status Capability: Not Supported 00:35:32.752 Command & Feature Lockdown Capability: Not Supported 00:35:32.752 Abort Command Limit: 4 00:35:32.752 Async Event Request Limit: 4 00:35:32.752 Number of Firmware Slots: N/A 00:35:32.752 Firmware Slot 1 Read-Only: N/A 00:35:32.752 Firmware Activation Without Reset: N/A 00:35:32.752 Multiple Update Detection Support: N/A 00:35:32.752 Firmware Update Granularity: No Information Provided 00:35:32.752 Per-Namespace SMART Log: Yes 00:35:32.752 Asymmetric Namespace Access Log Page: Supported 00:35:32.752 ANA Transition Time : 10 sec 00:35:32.752 00:35:32.753 Asymmetric Namespace Access Capabilities 00:35:32.753 ANA Optimized State : Supported 00:35:32.753 ANA Non-Optimized State : Supported 00:35:32.753 ANA Inaccessible State : Supported 00:35:32.753 ANA Persistent Loss State : Supported 00:35:32.753 ANA Change State : Supported 00:35:32.753 ANAGRPID is not changed : No 00:35:32.753 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:35:32.753 00:35:32.753 ANA Group Identifier Maximum : 128 00:35:32.753 Number of ANA Group Identifiers : 128 00:35:32.753 Max Number of Allowed Namespaces : 1024 00:35:32.753 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:35:32.753 Command Effects Log Page: Supported 00:35:32.753 Get Log Page Extended Data: Supported 00:35:32.753 Telemetry Log Pages: Not Supported 00:35:32.753 Persistent Event Log Pages: Not Supported 00:35:32.753 Supported Log Pages Log Page: May Support 00:35:32.753 Commands Supported & Effects Log Page: Not Supported 00:35:32.753 Feature Identifiers & Effects Log Page:May Support 00:35:32.753 NVMe-MI Commands & Effects Log Page: May Support 00:35:32.753 Data Area 4 for Telemetry Log: Not Supported 00:35:32.753 Error Log Page Entries Supported: 128 00:35:32.753 Keep Alive: Supported 00:35:32.753 Keep Alive Granularity: 1000 ms 00:35:32.753 00:35:32.753 NVM Command Set Attributes 00:35:32.753 ========================== 00:35:32.753 Submission Queue Entry Size 00:35:32.753 Max: 64 00:35:32.753 Min: 64 00:35:32.753 Completion Queue Entry Size 00:35:32.753 Max: 16 00:35:32.753 Min: 16 00:35:32.753 Number of Namespaces: 1024 00:35:32.753 Compare Command: Not Supported 00:35:32.753 Write Uncorrectable Command: Not Supported 00:35:32.753 Dataset Management Command: Supported 
00:35:32.753 Write Zeroes Command: Supported 00:35:32.753 Set Features Save Field: Not Supported 00:35:32.753 Reservations: Not Supported 00:35:32.753 Timestamp: Not Supported 00:35:32.753 Copy: Not Supported 00:35:32.753 Volatile Write Cache: Present 00:35:32.753 Atomic Write Unit (Normal): 1 00:35:32.753 Atomic Write Unit (PFail): 1 00:35:32.753 Atomic Compare & Write Unit: 1 00:35:32.753 Fused Compare & Write: Not Supported 00:35:32.753 Scatter-Gather List 00:35:32.753 SGL Command Set: Supported 00:35:32.753 SGL Keyed: Not Supported 00:35:32.753 SGL Bit Bucket Descriptor: Not Supported 00:35:32.753 SGL Metadata Pointer: Not Supported 00:35:32.753 Oversized SGL: Not Supported 00:35:32.753 SGL Metadata Address: Not Supported 00:35:32.753 SGL Offset: Supported 00:35:32.753 Transport SGL Data Block: Not Supported 00:35:32.753 Replay Protected Memory Block: Not Supported 00:35:32.753 00:35:32.753 Firmware Slot Information 00:35:32.753 ========================= 00:35:32.753 Active slot: 0 00:35:32.753 00:35:32.753 Asymmetric Namespace Access 00:35:32.753 =========================== 00:35:32.753 Change Count : 0 00:35:32.753 Number of ANA Group Descriptors : 1 00:35:32.753 ANA Group Descriptor : 0 00:35:32.753 ANA Group ID : 1 00:35:32.753 Number of NSID Values : 1 00:35:32.753 Change Count : 0 00:35:32.753 ANA State : 1 00:35:32.753 Namespace Identifier : 1 00:35:32.753 00:35:32.753 Commands Supported and Effects 00:35:32.753 ============================== 00:35:32.753 Admin Commands 00:35:32.753 -------------- 00:35:32.753 Get Log Page (02h): Supported 00:35:32.753 Identify (06h): Supported 00:35:32.753 Abort (08h): Supported 00:35:32.753 Set Features (09h): Supported 00:35:32.753 Get Features (0Ah): Supported 00:35:32.753 Asynchronous Event Request (0Ch): Supported 00:35:32.753 Keep Alive (18h): Supported 00:35:32.753 I/O Commands 00:35:32.753 ------------ 00:35:32.753 Flush (00h): Supported 00:35:32.753 Write (01h): Supported LBA-Change 00:35:32.753 Read (02h): Supported 00:35:32.753 Write Zeroes (08h): Supported LBA-Change 00:35:32.753 Dataset Management (09h): Supported 00:35:32.753 00:35:32.753 Error Log 00:35:32.753 ========= 00:35:32.753 Entry: 0 00:35:32.753 Error Count: 0x3 00:35:32.753 Submission Queue Id: 0x0 00:35:32.753 Command Id: 0x5 00:35:32.753 Phase Bit: 0 00:35:32.753 Status Code: 0x2 00:35:32.753 Status Code Type: 0x0 00:35:32.753 Do Not Retry: 1 00:35:32.753 Error Location: 0x28 00:35:32.753 LBA: 0x0 00:35:32.753 Namespace: 0x0 00:35:32.753 Vendor Log Page: 0x0 00:35:32.753 ----------- 00:35:32.753 Entry: 1 00:35:32.753 Error Count: 0x2 00:35:32.753 Submission Queue Id: 0x0 00:35:32.753 Command Id: 0x5 00:35:32.753 Phase Bit: 0 00:35:32.753 Status Code: 0x2 00:35:32.753 Status Code Type: 0x0 00:35:32.753 Do Not Retry: 1 00:35:32.753 Error Location: 0x28 00:35:32.753 LBA: 0x0 00:35:32.753 Namespace: 0x0 00:35:32.753 Vendor Log Page: 0x0 00:35:32.753 ----------- 00:35:32.753 Entry: 2 00:35:32.753 Error Count: 0x1 00:35:32.753 Submission Queue Id: 0x0 00:35:32.753 Command Id: 0x4 00:35:32.753 Phase Bit: 0 00:35:32.753 Status Code: 0x2 00:35:32.753 Status Code Type: 0x0 00:35:32.753 Do Not Retry: 1 00:35:32.753 Error Location: 0x28 00:35:32.753 LBA: 0x0 00:35:32.753 Namespace: 0x0 00:35:32.753 Vendor Log Page: 0x0 00:35:32.753 00:35:32.753 Number of Queues 00:35:32.753 ================ 00:35:32.753 Number of I/O Submission Queues: 128 00:35:32.753 Number of I/O Completion Queues: 128 00:35:32.753 00:35:32.753 ZNS Specific Controller Data 00:35:32.753 
============================ 00:35:32.753 Zone Append Size Limit: 0 00:35:32.753 00:35:32.753 00:35:32.753 Active Namespaces 00:35:32.753 ================= 00:35:32.753 get_feature(0x05) failed 00:35:32.753 Namespace ID:1 00:35:32.753 Command Set Identifier: NVM (00h) 00:35:32.753 Deallocate: Supported 00:35:32.753 Deallocated/Unwritten Error: Not Supported 00:35:32.753 Deallocated Read Value: Unknown 00:35:32.753 Deallocate in Write Zeroes: Not Supported 00:35:32.753 Deallocated Guard Field: 0xFFFF 00:35:32.753 Flush: Supported 00:35:32.753 Reservation: Not Supported 00:35:32.753 Namespace Sharing Capabilities: Multiple Controllers 00:35:32.753 Size (in LBAs): 3750748848 (1788GiB) 00:35:32.753 Capacity (in LBAs): 3750748848 (1788GiB) 00:35:32.753 Utilization (in LBAs): 3750748848 (1788GiB) 00:35:32.753 UUID: 03c2812e-18e1-47d8-88ba-0a284314cd8f 00:35:32.753 Thin Provisioning: Not Supported 00:35:32.753 Per-NS Atomic Units: Yes 00:35:32.753 Atomic Write Unit (Normal): 8 00:35:32.753 Atomic Write Unit (PFail): 8 00:35:32.753 Preferred Write Granularity: 8 00:35:32.753 Atomic Compare & Write Unit: 8 00:35:32.753 Atomic Boundary Size (Normal): 0 00:35:32.753 Atomic Boundary Size (PFail): 0 00:35:32.753 Atomic Boundary Offset: 0 00:35:32.753 NGUID/EUI64 Never Reused: No 00:35:32.753 ANA group ID: 1 00:35:32.753 Namespace Write Protected: No 00:35:32.753 Number of LBA Formats: 1 00:35:32.753 Current LBA Format: LBA Format #00 00:35:32.753 LBA Format #00: Data Size: 512 Metadata Size: 0 00:35:32.753 00:35:32.753 20:27:25 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:35:32.753 20:27:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:32.753 20:27:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:35:32.753 20:27:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:32.753 20:27:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:35:32.753 20:27:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:32.753 20:27:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:32.753 rmmod nvme_tcp 00:35:32.753 rmmod nvme_fabrics 00:35:32.753 20:27:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:32.753 20:27:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:35:32.753 20:27:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:35:32.753 20:27:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:35:32.753 20:27:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:32.753 20:27:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:32.753 20:27:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:32.753 20:27:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:32.753 20:27:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:32.753 20:27:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:32.753 20:27:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:32.754 20:27:25 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:35.297 20:27:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:35.297 20:27:27 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:35:35.297 20:27:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:35.297 20:27:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:35:35.297 20:27:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:35.297 20:27:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:35.297 20:27:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:35.297 20:27:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:35.297 20:27:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:35:35.297 20:27:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:35:35.297 20:27:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:38.602 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:35:38.602 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:35:38.602 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:35:38.602 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:35:38.602 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:35:38.602 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:35:38.602 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:35:38.602 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:35:38.602 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:35:38.602 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:35:38.861 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:35:38.861 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:35:38.861 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:35:38.861 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:35:38.861 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:35:38.861 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:35:38.861 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:35:39.121 00:35:39.121 real 0m20.995s 00:35:39.121 user 0m5.587s 00:35:39.121 sys 0m12.275s 00:35:39.121 20:27:31 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:39.121 20:27:31 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:39.121 ************************************ 00:35:39.121 END TEST nvmf_identify_kernel_target 00:35:39.121 ************************************ 00:35:39.121 20:27:31 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:39.121 20:27:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:35:39.121 20:27:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:39.121 20:27:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:39.121 ************************************ 00:35:39.121 START TEST nvmf_auth_host 00:35:39.121 ************************************ 
00:35:39.121 20:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:39.383 * Looking for test storage... 00:35:39.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:35:39.383 20:27:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.518 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:47.518 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:35:47.518 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:47.518 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:47.518 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:47.518 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:47.518 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:47.518 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:35:47.518 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:47.518 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:35:47.518 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:47.519 
20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:47.519 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:47.519 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:47.519 Found net devices under 0000:31:00.0: 
cvl_0_0 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:47.519 Found net devices under 0000:31:00.1: cvl_0_1 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:47.519 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:47.519 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.536 ms 00:35:47.519 00:35:47.519 --- 10.0.0.2 ping statistics --- 00:35:47.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:47.519 rtt min/avg/max/mdev = 0.536/0.536/0.536/0.000 ms 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:47.519 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:47.519 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:35:47.519 00:35:47.519 --- 10.0.0.1 ping statistics --- 00:35:47.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:47.519 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:47.519 20:27:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:35:47.520 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:47.520 20:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:47.520 20:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.520 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=289381 00:35:47.520 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 289381 00:35:47.520 20:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 289381 ']' 00:35:47.520 20:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:47.520 20:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:47.520 20:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
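The waitforlisten step here is the usual autotest pattern: nvmf_tgt is launched in the background inside the cvl_0_0_ns_spdk namespace, so that the 10.0.0.1 / 10.0.0.2 traffic actually traverses the two cvl_0_* ports rather than staying inside one network namespace, and the test then blocks until the target's JSON-RPC socket answers. The sketch below is a simplified stand-in, not a copy of autotest_common.sh's waitforlisten; the polling loop and its rpc_get_methods probe are assumptions, while the binary path, the -i 0 -e 0xFFFF -L nvme_auth flags, the socket path, and the namespace prefix come from the log.

# Launch the SPDK target inside the test namespace and wait for its RPC socket.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc_sock=/var/tmp/spdk.sock
NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)

"${NVMF_TARGET_NS_CMD[@]}" "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!

# Simplified wait loop: bail out if the app died, otherwise poll rpc.py until it answers.
for ((i = 0; i < 100; i++)); do
    kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    if "$rootdir/scripts/rpc.py" -s "$rpc_sock" -t 1 rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.5
done
echo "nvmf_tgt ($nvmfpid) is listening on $rpc_sock"

Once the socket responds, the auth test can start issuing RPCs to this target and generating the DH-HMAC-CHAP key files shown below.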
00:35:47.520 20:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:47.520 20:27:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.520 20:27:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=943e982b729030d633f90261aeb93674 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.KYs 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 943e982b729030d633f90261aeb93674 0 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 943e982b729030d633f90261aeb93674 0 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=943e982b729030d633f90261aeb93674 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.KYs 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.KYs 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.KYs 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=510520b92650e5b2f7d77a07102430f2d9bba2520b46ac59d6982a66c864297a 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.b0J 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 510520b92650e5b2f7d77a07102430f2d9bba2520b46ac59d6982a66c864297a 3 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 510520b92650e5b2f7d77a07102430f2d9bba2520b46ac59d6982a66c864297a 3 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=510520b92650e5b2f7d77a07102430f2d9bba2520b46ac59d6982a66c864297a 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:35:48.463 20:27:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:48.724 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.b0J 00:35:48.724 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.b0J 00:35:48.724 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.b0J 00:35:48.724 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:35:48.724 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:48.724 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:48.724 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:48.724 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:35:48.724 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:35:48.724 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:48.724 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=79b023972494e525ed3ff84dec86e60bd2b5c4c01202ba7b 00:35:48.724 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:35:48.724 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.hPp 00:35:48.724 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 79b023972494e525ed3ff84dec86e60bd2b5c4c01202ba7b 0 00:35:48.724 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 79b023972494e525ed3ff84dec86e60bd2b5c4c01202ba7b 0 00:35:48.724 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:48.724 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=79b023972494e525ed3ff84dec86e60bd2b5c4c01202ba7b 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:35:48.725 
20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.hPp 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.hPp 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.hPp 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2993d01c6a1528c26757171d9a0bb31a008baf2c229820ca 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Rnt 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2993d01c6a1528c26757171d9a0bb31a008baf2c229820ca 2 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2993d01c6a1528c26757171d9a0bb31a008baf2c229820ca 2 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2993d01c6a1528c26757171d9a0bb31a008baf2c229820ca 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Rnt 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Rnt 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Rnt 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5f469de5be0f81f98b49553c1e5e4bda 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.XeQ 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@729 -- # format_dhchap_key 5f469de5be0f81f98b49553c1e5e4bda 1 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5f469de5be0f81f98b49553c1e5e4bda 1 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5f469de5be0f81f98b49553c1e5e4bda 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:35:48.725 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.XeQ 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.XeQ 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.XeQ 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=637bcef9c0423f5d24baead24bc26d98 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.l18 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 637bcef9c0423f5d24baead24bc26d98 1 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 637bcef9c0423f5d24baead24bc26d98 1 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=637bcef9c0423f5d24baead24bc26d98 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.l18 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.l18 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.l18 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:35:48.987 20:27:41 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6d687d960d6c38ff84058f3f8749e9af9b0438e4adc640f1 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.13V 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6d687d960d6c38ff84058f3f8749e9af9b0438e4adc640f1 2 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6d687d960d6c38ff84058f3f8749e9af9b0438e4adc640f1 2 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6d687d960d6c38ff84058f3f8749e9af9b0438e4adc640f1 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.13V 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.13V 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.13V 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=342c421b18e89dd3eadda10040715853 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.3mU 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 342c421b18e89dd3eadda10040715853 0 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 342c421b18e89dd3eadda10040715853 0 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=342c421b18e89dd3eadda10040715853 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.3mU 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.3mU 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- 
# ckeys[3]=/tmp/spdk.key-null.3mU 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d0e4aa98219103bec6568878ba90967c62383ba3436b2e05873a0712656a13f8 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.v8N 00:35:48.987 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d0e4aa98219103bec6568878ba90967c62383ba3436b2e05873a0712656a13f8 3 00:35:48.988 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d0e4aa98219103bec6568878ba90967c62383ba3436b2e05873a0712656a13f8 3 00:35:48.988 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:48.988 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:48.988 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d0e4aa98219103bec6568878ba90967c62383ba3436b2e05873a0712656a13f8 00:35:48.988 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:35:48.988 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:49.248 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.v8N 00:35:49.248 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.v8N 00:35:49.248 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.v8N 00:35:49.248 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:35:49.248 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 289381 00:35:49.248 20:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 289381 ']' 00:35:49.248 20:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:49.248 20:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:49.248 20:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:49.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
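Each gen_dhchap_key call above follows the same recipe: pull len/2 random bytes from /dev/urandom with xxd, wrap the resulting hex string into the DHHC-1 secret representation with a small inline python helper (whose body xtrace does not show), and store it in a mode-0600 temp file. A rough standalone equivalent, under the assumption that the helper base64-encodes the hex string followed by its little-endian CRC-32, which is consistent with the DHHC-1:<digest>:<base64>: values printed later in the log:

  # Sketch of gen_dhchap_key null 32 (digest index 0, 32 hex characters); CRC handling is an assumption.
  digest=0                                          # 0=null, 1=sha256, 2=sha384, 3=sha512
  len=32
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # e.g. 943e982b729030d633f90261aeb93674
  file=$(mktemp -t spdk.key-null.XXX)
  python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k+crc).decode()))' "$key" "$digest" > "$file"
  chmod 0600 "$file"
  cat "$file"                                       # DHHC-1:00:...: as echoed in the trace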
00:35:49.248 20:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:49.248 20:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.248 20:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:49.248 20:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:35:49.248 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:49.248 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.KYs 00:35:49.248 20:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.248 20:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.248 20:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.248 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.b0J ]] 00:35:49.248 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.b0J 00:35:49.249 20:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.249 20:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.hPp 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Rnt ]] 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Rnt 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.XeQ 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.l18 ]] 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.l18 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
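The keyring_file_add_key calls above go through rpc_cmd, which in these scripts wraps the SPDK RPC client (scripts/rpc.py) talking to the /var/tmp/spdk.sock socket. Registering the freshly generated secrets by hand would look roughly like the list below; the key names (key0..key4, ckey0..ckey3) and file paths are the ones created above, and the rpc shell function is an assumption standing in for rpc_cmd:

  # Register every generated DHHC-1 secret file with the SPDK keyring.
  rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
  rpc keyring_file_add_key key0  /tmp/spdk.key-null.KYs
  rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.b0J
  rpc keyring_file_add_key key1  /tmp/spdk.key-null.hPp
  rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Rnt
  rpc keyring_file_add_key key2  /tmp/spdk.key-sha256.XeQ
  rpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.l18
  rpc keyring_file_add_key key3  /tmp/spdk.key-sha384.13V
  rpc keyring_file_add_key ckey3 /tmp/spdk.key-null.3mU
  rpc keyring_file_add_key key4  /tmp/spdk.key-sha512.v8N   # keys[4] has no companion ckey4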
00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.13V 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.3mU ]] 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.3mU 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.v8N 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.510 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:49.511 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:49.511 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:49.511 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:49.511 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:49.511 20:27:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:35:49.511 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:35:49.511 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:35:49.511 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:49.511 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:49.511 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:49.511 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
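nvmet_auth_init's call to configure_kernel_target (traced next) builds a Linux-kernel NVMe-oF target entirely through configfs: a subsystem directory for nqn.2024-02.io.spdk:cnode0, a namespace backed by the probed /dev/nvme0n1, and a TCP port on 10.0.0.1:4420, tied together with a symlink. A condensed sketch of what the following mkdir/echo/ln trace does; the attribute file names are the stock kernel nvmet ones and are inferred here, since xtrace does not print redirection targets:

  # Kernel nvmet target: subsystem nqn.2024-02.io.spdk:cnode0, namespace /dev/nvme0n1, TCP port 10.0.0.1:4420.
  modprobe nvmet
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$subsys" "$subsys/namespaces/1" "$port"
  echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # assumption: the traced 'echo SPDK-...' sets the model string
  echo 1            > "$subsys/attr_allow_any_host"             # assumption; host auth later restricts access via allowed_hosts
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"                           # expose the subsystem on the port

The `nvme discover ... -a 10.0.0.1 -t tcp -s 4420` output further down confirms that both the discovery subsystem and cnode0 are reachable over that port.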
00:35:49.511 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:35:49.511 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:35:49.511 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:49.511 20:27:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:53.717 Waiting for block devices as requested 00:35:53.717 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:53.717 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:53.717 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:53.717 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:53.717 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:53.717 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:53.717 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:53.717 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:53.977 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:35:53.977 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:54.238 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:54.238 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:54.238 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:54.498 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:54.498 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:54.498 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:54.498 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:55.439 20:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:55.439 20:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:55.439 20:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:35:55.439 20:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:35:55.439 20:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:55.439 20:27:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:35:55.439 20:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:35:55.439 20:27:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:35:55.439 20:27:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:55.439 No valid GPT data, bailing 00:35:55.700 20:27:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:55.700 20:27:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:35:55.700 20:27:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:35:55.700 20:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:35:55.700 20:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:35:55.700 20:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:55.700 20:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:55.700 20:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:55.700 20:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:35:55.700 20:27:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:35:55.700 20:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:35:55.700 20:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:35:55.700 20:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:35:55.700 20:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:35:55.700 20:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:35:55.700 20:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:35:55.700 20:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:55.700 20:27:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:35:55.700 00:35:55.700 Discovery Log Number of Records 2, Generation counter 2 00:35:55.700 =====Discovery Log Entry 0====== 00:35:55.700 trtype: tcp 00:35:55.700 adrfam: ipv4 00:35:55.700 subtype: current discovery subsystem 00:35:55.700 treq: not specified, sq flow control disable supported 00:35:55.700 portid: 1 00:35:55.700 trsvcid: 4420 00:35:55.700 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:55.700 traddr: 10.0.0.1 00:35:55.700 eflags: none 00:35:55.700 sectype: none 00:35:55.700 =====Discovery Log Entry 1====== 00:35:55.700 trtype: tcp 00:35:55.700 adrfam: ipv4 00:35:55.700 subtype: nvme subsystem 00:35:55.700 treq: not specified, sq flow control disable supported 00:35:55.700 portid: 1 00:35:55.700 trsvcid: 4420 00:35:55.700 subnqn: nqn.2024-02.io.spdk:cnode0 00:35:55.700 traddr: 10.0.0.1 00:35:55.700 eflags: none 00:35:55.700 sectype: none 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzliMDIzOTcyNDk0ZTUyNWVkM2ZmODRkZWM4NmU2MGJkMmI1YzRjMDEyMDJiYTdiCV2TxQ==: 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzliMDIzOTcyNDk0ZTUyNWVkM2ZmODRkZWM4NmU2MGJkMmI1YzRjMDEyMDJiYTdiCV2TxQ==: 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: 
]] 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.700 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.961 nvme0n1 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.961 20:27:48 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQzZTk4MmI3MjkwMzBkNjMzZjkwMjYxYWViOTM2NzSnbqGc: 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQzZTk4MmI3MjkwMzBkNjMzZjkwMjYxYWViOTM2NzSnbqGc: 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: ]] 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.961 
20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.961 nvme0n1 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:55.961 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.962 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:55.962 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzliMDIzOTcyNDk0ZTUyNWVkM2ZmODRkZWM4NmU2MGJkMmI1YzRjMDEyMDJiYTdiCV2TxQ==: 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:56.222 20:27:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzliMDIzOTcyNDk0ZTUyNWVkM2ZmODRkZWM4NmU2MGJkMmI1YzRjMDEyMDJiYTdiCV2TxQ==: 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: ]] 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.222 nvme0n1 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
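From here the test settles into a fixed per-combination pattern: nvmet_auth_set_key writes the negotiated hash, DH group, and DHHC-1 secrets into the kernel target's per-host auth attributes for nqn.2024-02.io.spdk:host0 (linked into cnode0's allowed_hosts earlier), and connect_authenticate then configures the SPDK initiator over RPC and performs an authenticated attach with the matching keyring entries. One iteration, the sha256/ffdhe2048/keyid-1 case traced just above, looks roughly like this; the dhchap_* attribute names under the host directory are inferred, since xtrace hides the redirections:

  # Target side: per-host DH-HMAC-CHAP parameters for this iteration.
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"        # inferred attribute names (dhchap_*)
  echo ffdhe2048      > "$host/dhchap_dhgroup"
  echo 'DHHC-1:00:NzliMDIzOTcyNDk0ZTUyNWVkM2ZmODRkZWM4NmU2MGJkMmI1YzRjMDEyMDJiYTdiCV2TxQ==:' > "$host/dhchap_key"
  echo 'DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==:' > "$host/dhchap_ctrl_key"

  # Initiator side: same digest/dhgroup policy, then an authenticated attach with key1/ckey1.
  rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
  rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
  rpc bdev_nvme_detach_controller nvme0

The remainder of the trace repeats exactly this shape, stepping keyid from 0 to 4 before moving on to the next DH group and digest combinations.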
00:35:56.222 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.482 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY0NjlkZTViZTBmODFmOThiNDk1NTNjMWU1ZTRiZGGn6oKx: 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY0NjlkZTViZTBmODFmOThiNDk1NTNjMWU1ZTRiZGGn6oKx: 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: ]] 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.483 nvme0n1 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ2ODdkOTYwZDZjMzhmZjg0MDU4ZjNmODc0OWU5YWY5YjA0MzhlNGFkYzY0MGYx/mHpRQ==: 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ2ODdkOTYwZDZjMzhmZjg0MDU4ZjNmODc0OWU5YWY5YjA0MzhlNGFkYzY0MGYx/mHpRQ==: 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: ]] 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: 00:35:56.483 20:27:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.483 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.744 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.744 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:56.744 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:56.744 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:56.744 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.744 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.744 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:56.744 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.744 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:56.744 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:56.744 20:27:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:56.744 20:27:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:56.744 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.744 20:27:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.744 nvme0n1 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBlNGFhOTgyMTkxMDNiZWM2NTY4ODc4YmE5MDk2N2M2MjM4M2JhMzQzNmIyZTA1ODczYTA3MTI2NTZhMTNmOHDAc9w=: 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBlNGFhOTgyMTkxMDNiZWM2NTY4ODc4YmE5MDk2N2M2MjM4M2JhMzQzNmIyZTA1ODczYTA3MTI2NTZhMTNmOHDAc9w=: 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.744 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.005 nvme0n1 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQzZTk4MmI3MjkwMzBkNjMzZjkwMjYxYWViOTM2NzSnbqGc: 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQzZTk4MmI3MjkwMzBkNjMzZjkwMjYxYWViOTM2NzSnbqGc: 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: ]] 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.005 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.266 nvme0n1 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzliMDIzOTcyNDk0ZTUyNWVkM2ZmODRkZWM4NmU2MGJkMmI1YzRjMDEyMDJiYTdiCV2TxQ==: 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzliMDIzOTcyNDk0ZTUyNWVkM2ZmODRkZWM4NmU2MGJkMmI1YzRjMDEyMDJiYTdiCV2TxQ==: 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: ]] 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.266 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.527 nvme0n1 00:35:57.527 
20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY0NjlkZTViZTBmODFmOThiNDk1NTNjMWU1ZTRiZGGn6oKx: 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY0NjlkZTViZTBmODFmOThiNDk1NTNjMWU1ZTRiZGGn6oKx: 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: ]] 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.527 20:27:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.788 nvme0n1 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ2ODdkOTYwZDZjMzhmZjg0MDU4ZjNmODc0OWU5YWY5YjA0MzhlNGFkYzY0MGYx/mHpRQ==: 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ2ODdkOTYwZDZjMzhmZjg0MDU4ZjNmODc0OWU5YWY5YjA0MzhlNGFkYzY0MGYx/mHpRQ==: 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: ]] 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.788 20:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.049 nvme0n1 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.049 
20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBlNGFhOTgyMTkxMDNiZWM2NTY4ODc4YmE5MDk2N2M2MjM4M2JhMzQzNmIyZTA1ODczYTA3MTI2NTZhMTNmOHDAc9w=: 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBlNGFhOTgyMTkxMDNiZWM2NTY4ODc4YmE5MDk2N2M2MjM4M2JhMzQzNmIyZTA1ODczYTA3MTI2NTZhMTNmOHDAc9w=: 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:58.049 20:27:50 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.049 20:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.309 nvme0n1 00:35:58.309 20:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.309 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.309 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQzZTk4MmI3MjkwMzBkNjMzZjkwMjYxYWViOTM2NzSnbqGc: 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQzZTk4MmI3MjkwMzBkNjMzZjkwMjYxYWViOTM2NzSnbqGc: 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: ]] 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:35:58.310 20:27:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.310 20:27:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.570 nvme0n1 00:35:58.570 20:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.570 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.570 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.570 20:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.570 20:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.570 20:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.570 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.570 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.570 20:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.570 20:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.570 20:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.570 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:35:58.570 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:35:58.571 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.571 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:58.571 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:58.571 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:58.571 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzliMDIzOTcyNDk0ZTUyNWVkM2ZmODRkZWM4NmU2MGJkMmI1YzRjMDEyMDJiYTdiCV2TxQ==: 00:35:58.571 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: 00:35:58.571 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:58.571 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:58.571 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzliMDIzOTcyNDk0ZTUyNWVkM2ZmODRkZWM4NmU2MGJkMmI1YzRjMDEyMDJiYTdiCV2TxQ==: 00:35:58.571 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: ]] 00:35:58.571 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: 00:35:58.571 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:35:58.571 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.571 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:58.571 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:58.571 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:58.571 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.571 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:58.571 20:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.571 20:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.831 20:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.831 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.831 20:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:58.831 20:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:58.831 20:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:58.831 20:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.831 20:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.831 20:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:58.831 20:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:58.831 20:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:58.831 20:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:58.831 20:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:58.831 20:27:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:58.831 20:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.831 20:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.091 nvme0n1 00:35:59.091 20:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.091 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.091 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.091 20:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.091 20:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.091 20:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.091 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.091 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.091 20:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.091 20:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.091 20:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.091 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.091 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:35:59.091 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.091 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:59.091 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:59.091 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:59.091 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY0NjlkZTViZTBmODFmOThiNDk1NTNjMWU1ZTRiZGGn6oKx: 00:35:59.091 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: 00:35:59.091 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:59.091 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:59.091 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY0NjlkZTViZTBmODFmOThiNDk1NTNjMWU1ZTRiZGGn6oKx: 00:35:59.091 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: ]] 00:35:59.091 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: 00:35:59.091 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:35:59.091 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.091 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:59.091 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:59.091 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:59.092 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.092 20:27:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:59.092 20:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.092 20:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.092 20:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.092 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.092 20:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:59.092 20:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:59.092 20:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:59.092 20:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.092 20:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.092 20:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:59.092 20:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.092 20:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:59.092 20:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:59.092 20:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:59.092 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:59.092 20:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.092 20:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.352 nvme0n1 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ2ODdkOTYwZDZjMzhmZjg0MDU4ZjNmODc0OWU5YWY5YjA0MzhlNGFkYzY0MGYx/mHpRQ==: 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ2ODdkOTYwZDZjMzhmZjg0MDU4ZjNmODc0OWU5YWY5YjA0MzhlNGFkYzY0MGYx/mHpRQ==: 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: ]] 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.352 20:27:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.613 nvme0n1 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.613 20:27:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBlNGFhOTgyMTkxMDNiZWM2NTY4ODc4YmE5MDk2N2M2MjM4M2JhMzQzNmIyZTA1ODczYTA3MTI2NTZhMTNmOHDAc9w=: 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBlNGFhOTgyMTkxMDNiZWM2NTY4ODc4YmE5MDk2N2M2MjM4M2JhMzQzNmIyZTA1ODczYTA3MTI2NTZhMTNmOHDAc9w=: 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:59.613 20:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.614 20:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:59.614 20:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:59.614 20:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:59.614 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:59.614 20:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.614 20:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.186 nvme0n1 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQzZTk4MmI3MjkwMzBkNjMzZjkwMjYxYWViOTM2NzSnbqGc: 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQzZTk4MmI3MjkwMzBkNjMzZjkwMjYxYWViOTM2NzSnbqGc: 00:36:00.186 20:27:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: ]] 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.186 20:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.448 nvme0n1 00:36:00.448 20:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.448 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.448 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:00.448 20:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.448 20:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.448 20:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.708 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.708 
20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:00.708 20:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.708 20:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.708 20:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.708 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:00.708 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:36:00.708 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:00.708 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:00.708 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:00.708 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:00.708 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzliMDIzOTcyNDk0ZTUyNWVkM2ZmODRkZWM4NmU2MGJkMmI1YzRjMDEyMDJiYTdiCV2TxQ==: 00:36:00.708 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: 00:36:00.708 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:00.708 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:00.708 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzliMDIzOTcyNDk0ZTUyNWVkM2ZmODRkZWM4NmU2MGJkMmI1YzRjMDEyMDJiYTdiCV2TxQ==: 00:36:00.708 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: ]] 00:36:00.708 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: 00:36:00.708 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:36:00.709 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:00.709 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:00.709 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:00.709 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:00.709 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:00.709 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:00.709 20:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.709 20:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.709 20:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.709 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:00.709 20:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:00.709 20:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:00.709 20:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:00.709 20:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:00.709 20:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:00.709 20:27:52 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:00.709 20:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:00.709 20:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:00.709 20:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:00.709 20:27:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:00.709 20:27:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:00.709 20:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.709 20:27:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.969 nvme0n1 00:36:00.969 20:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:00.969 20:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.969 20:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:00.969 20:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.969 20:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY0NjlkZTViZTBmODFmOThiNDk1NTNjMWU1ZTRiZGGn6oKx: 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY0NjlkZTViZTBmODFmOThiNDk1NTNjMWU1ZTRiZGGn6oKx: 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: ]] 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.230 20:27:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.800 nvme0n1 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:01.800 
20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ2ODdkOTYwZDZjMzhmZjg0MDU4ZjNmODc0OWU5YWY5YjA0MzhlNGFkYzY0MGYx/mHpRQ==: 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ2ODdkOTYwZDZjMzhmZjg0MDU4ZjNmODc0OWU5YWY5YjA0MzhlNGFkYzY0MGYx/mHpRQ==: 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: ]] 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:01.800 20:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:01.801 20:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:01.801 20:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:01.801 20:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:01.801 20:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:01.801 20:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:01.801 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:01.801 20:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.801 20:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.061 nvme0n1 00:36:02.061 20:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:02.061 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:02.061 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:02.061 20:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:02.061 20:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.321 20:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:02.321 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:02.321 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBlNGFhOTgyMTkxMDNiZWM2NTY4ODc4YmE5MDk2N2M2MjM4M2JhMzQzNmIyZTA1ODczYTA3MTI2NTZhMTNmOHDAc9w=: 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBlNGFhOTgyMTkxMDNiZWM2NTY4ODc4YmE5MDk2N2M2MjM4M2JhMzQzNmIyZTA1ODczYTA3MTI2NTZhMTNmOHDAc9w=: 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:02.322 20:27:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.582 nvme0n1 00:36:02.582 20:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:02.842 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:02.842 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:02.842 20:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:02.842 20:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.842 20:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:02.842 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:02.842 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:02.842 20:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:02.842 20:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.842 20:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:02.842 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:02.842 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:02.843 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:36:02.843 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:02.843 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:02.843 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:02.843 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:02.843 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQzZTk4MmI3MjkwMzBkNjMzZjkwMjYxYWViOTM2NzSnbqGc: 00:36:02.843 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: 00:36:02.843 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:02.843 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:02.843 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQzZTk4MmI3MjkwMzBkNjMzZjkwMjYxYWViOTM2NzSnbqGc: 00:36:02.843 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: ]] 00:36:02.843 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: 00:36:02.843 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:36:02.843 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:02.843 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:02.843 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:02.843 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:02.843 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:02.843 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:02.843 20:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:02.843 20:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.843 20:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:02.843 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:02.843 20:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:02.843 20:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:02.843 20:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:02.843 20:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:02.843 20:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:02.843 20:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:02.843 20:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:02.843 20:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:02.843 20:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:02.843 20:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:02.843 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:02.843 20:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:02.843 20:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.413 nvme0n1 00:36:03.413 20:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.413 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:03.413 20:27:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:03.413 20:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.413 20:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.673 20:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzliMDIzOTcyNDk0ZTUyNWVkM2ZmODRkZWM4NmU2MGJkMmI1YzRjMDEyMDJiYTdiCV2TxQ==: 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzliMDIzOTcyNDk0ZTUyNWVkM2ZmODRkZWM4NmU2MGJkMmI1YzRjMDEyMDJiYTdiCV2TxQ==: 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: ]] 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.674 20:27:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.245 nvme0n1 00:36:04.245 20:27:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.245 20:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:04.245 20:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:04.245 20:27:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.245 20:27:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.245 20:27:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY0NjlkZTViZTBmODFmOThiNDk1NTNjMWU1ZTRiZGGn6oKx: 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NWY0NjlkZTViZTBmODFmOThiNDk1NTNjMWU1ZTRiZGGn6oKx: 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: ]] 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.505 20:27:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.076 nvme0n1 00:36:05.076 20:27:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.076 20:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.076 20:27:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.076 20:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:05.076 20:27:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.076 20:27:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:05.336 
20:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ2ODdkOTYwZDZjMzhmZjg0MDU4ZjNmODc0OWU5YWY5YjA0MzhlNGFkYzY0MGYx/mHpRQ==: 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ2ODdkOTYwZDZjMzhmZjg0MDU4ZjNmODc0OWU5YWY5YjA0MzhlNGFkYzY0MGYx/mHpRQ==: 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: ]] 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.336 20:27:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.949 nvme0n1 00:36:05.949 20:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.949 20:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.949 20:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:05.949 20:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBlNGFhOTgyMTkxMDNiZWM2NTY4ODc4YmE5MDk2N2M2MjM4M2JhMzQzNmIyZTA1ODczYTA3MTI2NTZhMTNmOHDAc9w=: 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBlNGFhOTgyMTkxMDNiZWM2NTY4ODc4YmE5MDk2N2M2MjM4M2JhMzQzNmIyZTA1ODczYTA3MTI2NTZhMTNmOHDAc9w=: 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:05.950 
20:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:05.950 20:27:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.891 nvme0n1 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQzZTk4MmI3MjkwMzBkNjMzZjkwMjYxYWViOTM2NzSnbqGc: 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQzZTk4MmI3MjkwMzBkNjMzZjkwMjYxYWViOTM2NzSnbqGc: 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: ]] 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.891 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.152 nvme0n1 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzliMDIzOTcyNDk0ZTUyNWVkM2ZmODRkZWM4NmU2MGJkMmI1YzRjMDEyMDJiYTdiCV2TxQ==: 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzliMDIzOTcyNDk0ZTUyNWVkM2ZmODRkZWM4NmU2MGJkMmI1YzRjMDEyMDJiYTdiCV2TxQ==: 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: ]] 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.152 nvme0n1 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.152 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY0NjlkZTViZTBmODFmOThiNDk1NTNjMWU1ZTRiZGGn6oKx: 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY0NjlkZTViZTBmODFmOThiNDk1NTNjMWU1ZTRiZGGn6oKx: 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: ]] 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.413 nvme0n1 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.413 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.673 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:07.673 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:07.673 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.673 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.673 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.673 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:07.673 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:36:07.673 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:07.673 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:07.673 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:07.673 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:07.673 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ2ODdkOTYwZDZjMzhmZjg0MDU4ZjNmODc0OWU5YWY5YjA0MzhlNGFkYzY0MGYx/mHpRQ==: 00:36:07.673 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: 00:36:07.673 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:07.673 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:07.673 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ2ODdkOTYwZDZjMzhmZjg0MDU4ZjNmODc0OWU5YWY5YjA0MzhlNGFkYzY0MGYx/mHpRQ==: 00:36:07.673 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: ]] 00:36:07.673 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: 00:36:07.673 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:36:07.673 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:07.673 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:07.673 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:07.673 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:07.673 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:07.673 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:07.673 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.673 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.673 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.673 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:07.673 20:27:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:36:07.673 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:07.673 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:07.673 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:07.673 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:07.673 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:07.673 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:07.673 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:07.674 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:07.674 20:27:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:07.674 20:27:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:07.674 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.674 20:27:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.674 nvme0n1 00:36:07.674 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.674 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.674 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:07.674 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.674 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.674 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.674 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:07.674 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:07.674 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.674 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.674 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.674 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:07.674 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:36:07.674 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:07.674 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:07.674 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:07.674 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:07.674 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBlNGFhOTgyMTkxMDNiZWM2NTY4ODc4YmE5MDk2N2M2MjM4M2JhMzQzNmIyZTA1ODczYTA3MTI2NTZhMTNmOHDAc9w=: 00:36:07.674 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:07.674 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:07.674 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:07.674 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDBlNGFhOTgyMTkxMDNiZWM2NTY4ODc4YmE5MDk2N2M2MjM4M2JhMzQzNmIyZTA1ODczYTA3MTI2NTZhMTNmOHDAc9w=: 00:36:07.674 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:07.674 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:36:07.674 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:07.674 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:07.674 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:07.674 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:07.674 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:07.674 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:07.674 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.674 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.933 nvme0n1 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQzZTk4MmI3MjkwMzBkNjMzZjkwMjYxYWViOTM2NzSnbqGc: 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQzZTk4MmI3MjkwMzBkNjMzZjkwMjYxYWViOTM2NzSnbqGc: 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: ]] 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:07.933 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:07.934 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.934 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.934 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.934 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:07.934 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:07.934 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:07.934 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:07.934 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:07.934 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:07.934 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:36:07.934 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:07.934 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:07.934 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:07.934 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:07.934 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:07.934 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.934 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.194 nvme0n1 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzliMDIzOTcyNDk0ZTUyNWVkM2ZmODRkZWM4NmU2MGJkMmI1YzRjMDEyMDJiYTdiCV2TxQ==: 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzliMDIzOTcyNDk0ZTUyNWVkM2ZmODRkZWM4NmU2MGJkMmI1YzRjMDEyMDJiYTdiCV2TxQ==: 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: ]] 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
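[The same pattern repeats for every digest / DH-group / keyid combination in this run: provision the secret on the target side (nvmet_auth_set_key), restrict the host to the one digest and DH group under test, attach a controller with the matching key pair, check that it shows up, then detach it before the next combination. Stripped of the xtrace noise, one iteration looks roughly like the sketch below; it assumes rpc_cmd forwards to SPDK's rpc.py client as in the test harness, and that key1/ckey1 name DHHC-1 secrets already registered earlier in the run.]

  # Host side: allow only the digest/dhgroup under test, then attach with the key pair.
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Verify the authenticated controller exists, then tear it down before the next combination.
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  rpc_cmd bdev_nvme_detach_controller nvme0

[Pinning bdev_nvme_set_options to a single --dhchap-digests / --dhchap-dhgroups value before each attach is what forces every connect to exercise exactly the combination being tested.]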
00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:08.194 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:08.195 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:08.195 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:08.195 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:08.195 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.195 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.455 nvme0n1 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY0NjlkZTViZTBmODFmOThiNDk1NTNjMWU1ZTRiZGGn6oKx: 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY0NjlkZTViZTBmODFmOThiNDk1NTNjMWU1ZTRiZGGn6oKx: 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: ]] 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.455 20:28:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.719 nvme0n1 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ2ODdkOTYwZDZjMzhmZjg0MDU4ZjNmODc0OWU5YWY5YjA0MzhlNGFkYzY0MGYx/mHpRQ==: 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ2ODdkOTYwZDZjMzhmZjg0MDU4ZjNmODc0OWU5YWY5YjA0MzhlNGFkYzY0MGYx/mHpRQ==: 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: ]] 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.719 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.020 nvme0n1 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDBlNGFhOTgyMTkxMDNiZWM2NTY4ODc4YmE5MDk2N2M2MjM4M2JhMzQzNmIyZTA1ODczYTA3MTI2NTZhMTNmOHDAc9w=: 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBlNGFhOTgyMTkxMDNiZWM2NTY4ODc4YmE5MDk2N2M2MjM4M2JhMzQzNmIyZTA1ODczYTA3MTI2NTZhMTNmOHDAc9w=: 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.020 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.349 nvme0n1 00:36:09.349 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.349 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:09.349 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.349 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:09.349 20:28:01 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.349 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.349 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:09.349 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:09.349 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.349 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.349 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.349 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:09.349 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:09.350 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:36:09.350 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:09.350 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:09.350 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:09.350 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:09.350 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQzZTk4MmI3MjkwMzBkNjMzZjkwMjYxYWViOTM2NzSnbqGc: 00:36:09.350 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: 00:36:09.350 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:09.350 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:09.350 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQzZTk4MmI3MjkwMzBkNjMzZjkwMjYxYWViOTM2NzSnbqGc: 00:36:09.350 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: ]] 00:36:09.350 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: 00:36:09.350 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:36:09.350 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:09.350 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:09.350 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:09.350 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:09.350 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:09.350 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:09.350 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.350 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.350 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.350 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:09.350 20:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:09.350 20:28:01 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:36:09.350 20:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:09.350 20:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:09.350 20:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:09.350 20:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:09.350 20:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:09.350 20:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:09.350 20:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:09.350 20:28:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:09.350 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:09.350 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.350 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.610 nvme0n1 00:36:09.610 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.610 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:09.610 20:28:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:09.610 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.610 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.610 20:28:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzliMDIzOTcyNDk0ZTUyNWVkM2ZmODRkZWM4NmU2MGJkMmI1YzRjMDEyMDJiYTdiCV2TxQ==: 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzliMDIzOTcyNDk0ZTUyNWVkM2ZmODRkZWM4NmU2MGJkMmI1YzRjMDEyMDJiYTdiCV2TxQ==: 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: ]] 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.610 20:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.870 nvme0n1 00:36:09.870 20:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.870 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:09.870 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:09.870 20:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.870 20:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.870 20:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.132 20:28:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY0NjlkZTViZTBmODFmOThiNDk1NTNjMWU1ZTRiZGGn6oKx: 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY0NjlkZTViZTBmODFmOThiNDk1NTNjMWU1ZTRiZGGn6oKx: 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: ]] 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.132 20:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.394 nvme0n1 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ2ODdkOTYwZDZjMzhmZjg0MDU4ZjNmODc0OWU5YWY5YjA0MzhlNGFkYzY0MGYx/mHpRQ==: 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ2ODdkOTYwZDZjMzhmZjg0MDU4ZjNmODc0OWU5YWY5YjA0MzhlNGFkYzY0MGYx/mHpRQ==: 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: ]] 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:36:10.394 20:28:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.394 20:28:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.654 nvme0n1 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBlNGFhOTgyMTkxMDNiZWM2NTY4ODc4YmE5MDk2N2M2MjM4M2JhMzQzNmIyZTA1ODczYTA3MTI2NTZhMTNmOHDAc9w=: 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBlNGFhOTgyMTkxMDNiZWM2NTY4ODc4YmE5MDk2N2M2MjM4M2JhMzQzNmIyZTA1ODczYTA3MTI2NTZhMTNmOHDAc9w=: 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:36:10.654 20:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.914 nvme0n1 00:36:10.914 20:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:10.914 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:10.914 20:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:10.914 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:10.914 20:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.914 20:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQzZTk4MmI3MjkwMzBkNjMzZjkwMjYxYWViOTM2NzSnbqGc: 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQzZTk4MmI3MjkwMzBkNjMzZjkwMjYxYWViOTM2NzSnbqGc: 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: ]] 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.174 20:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.433 nvme0n1 00:36:11.433 20:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.433 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:11.433 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:11.433 20:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.433 20:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.694 20:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.694 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:11.694 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:11.694 20:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.694 20:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.694 20:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.694 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:11.694 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:36:11.694 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:11.694 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:11.694 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:11.694 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:11.694 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzliMDIzOTcyNDk0ZTUyNWVkM2ZmODRkZWM4NmU2MGJkMmI1YzRjMDEyMDJiYTdiCV2TxQ==: 00:36:11.694 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: 00:36:11.694 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:11.694 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:11.694 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzliMDIzOTcyNDk0ZTUyNWVkM2ZmODRkZWM4NmU2MGJkMmI1YzRjMDEyMDJiYTdiCV2TxQ==: 00:36:11.694 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: ]] 00:36:11.694 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: 00:36:11.694 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:36:11.694 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:11.694 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:11.694 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:11.694 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:11.694 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:11.694 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:11.694 20:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.694 20:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.694 20:28:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.694 20:28:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:11.694 20:28:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:11.694 20:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:11.694 20:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:11.694 20:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:11.694 20:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:11.694 20:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:11.694 20:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:11.694 20:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:11.694 20:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:11.694 20:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:11.694 20:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:11.694 20:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.694 20:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.264 nvme0n1 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.264 20:28:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY0NjlkZTViZTBmODFmOThiNDk1NTNjMWU1ZTRiZGGn6oKx: 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY0NjlkZTViZTBmODFmOThiNDk1NTNjMWU1ZTRiZGGn6oKx: 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: ]] 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.264 20:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.524 nvme0n1 00:36:12.524 20:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.524 20:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:12.524 20:28:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:12.524 20:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.524 20:28:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.524 20:28:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ2ODdkOTYwZDZjMzhmZjg0MDU4ZjNmODc0OWU5YWY5YjA0MzhlNGFkYzY0MGYx/mHpRQ==: 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NmQ2ODdkOTYwZDZjMzhmZjg0MDU4ZjNmODc0OWU5YWY5YjA0MzhlNGFkYzY0MGYx/mHpRQ==: 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: ]] 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.784 20:28:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.044 nvme0n1 00:36:13.044 20:28:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:13.044 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:13.044 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:13.044 20:28:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:13.044 20:28:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBlNGFhOTgyMTkxMDNiZWM2NTY4ODc4YmE5MDk2N2M2MjM4M2JhMzQzNmIyZTA1ODczYTA3MTI2NTZhMTNmOHDAc9w=: 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBlNGFhOTgyMTkxMDNiZWM2NTY4ODc4YmE5MDk2N2M2MjM4M2JhMzQzNmIyZTA1ODczYTA3MTI2NTZhMTNmOHDAc9w=: 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
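The repeated blocks in this trace are driven by the key matrix at host/auth.sh@100-103: for every digest, every DH group, and every key index, the target-side key is reprogrammed (nvmet_auth_set_key) and a full connect/verify/detach cycle runs (connect_authenticate). A condensed sketch of that outer loop, reconstructed only from the script markers visible in the trace; the array contents in the comments are the values seen in this log, and the sketch is not a verbatim copy of the test script:

    # Outer matrix producing the repeated blocks in this log (sketch, not verbatim).
    for digest in "${digests[@]}"; do         # sha384 in this excerpt; sha512 begins further down
        for dhgroup in "${dhgroups[@]}"; do   # ffdhe2048 ... ffdhe8192
            for keyid in "${!keys[@]}"; do    # key indices 0..4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the target key
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # host-side round trip
            done
        done
    done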
00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:13.304 20:28:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.874 nvme0n1 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQzZTk4MmI3MjkwMzBkNjMzZjkwMjYxYWViOTM2NzSnbqGc: 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQzZTk4MmI3MjkwMzBkNjMzZjkwMjYxYWViOTM2NzSnbqGc: 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: ]] 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
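Each connect_authenticate block seen here (host/auth.sh@55-65) has the same shape: pick the key pair for the key index, push the digest/DH-group policy to the initiator, attach to the target at 10.0.0.1:4420 with the DH-HMAC-CHAP keys, confirm the controller came up as nvme0, then detach. A condensed sketch reconstructed from those trace markers; rpc_cmd, get_main_ns_ip and the keys/ckeys arrays are assumed to be provided by the surrounding test scripts, as the trace suggests, and this is not a verbatim copy of host/auth.sh:

    connect_authenticate() {  # sketch of the flow visible at host/auth.sh@55-65
        local digest=$1 dhgroup=$2 keyid=$3
        # The ":+" expansion adds the controller-key flag only when a bidirectional
        # key exists for this index (key index 4 in this run has an empty ckey).
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"

        # The "[[ nvme0 == \n\v\m\e\0 ]]" lines in the trace are xtrace's rendering
        # of this comparison against the controller name reported by the target.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }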
00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:13.874 20:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.443 nvme0n1 00:36:14.443 20:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.443 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:14.443 20:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.443 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:14.443 20:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.443 20:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzliMDIzOTcyNDk0ZTUyNWVkM2ZmODRkZWM4NmU2MGJkMmI1YzRjMDEyMDJiYTdiCV2TxQ==: 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzliMDIzOTcyNDk0ZTUyNWVkM2ZmODRkZWM4NmU2MGJkMmI1YzRjMDEyMDJiYTdiCV2TxQ==: 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: ]] 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:14.703 20:28:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.273 nvme0n1 00:36:15.273 20:28:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.273 20:28:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:15.273 20:28:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:15.273 20:28:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.274 20:28:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.274 20:28:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.274 20:28:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:15.274 20:28:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:15.274 20:28:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.274 20:28:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY0NjlkZTViZTBmODFmOThiNDk1NTNjMWU1ZTRiZGGn6oKx: 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY0NjlkZTViZTBmODFmOThiNDk1NTNjMWU1ZTRiZGGn6oKx: 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: ]] 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:15.534 20:28:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.104 nvme0n1 00:36:16.104 20:28:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.104 20:28:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.104 20:28:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.104 20:28:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.104 20:28:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.104 20:28:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.104 20:28:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.104 20:28:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.104 20:28:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.104 20:28:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.364 20:28:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.364 20:28:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:16.364 20:28:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:36:16.364 20:28:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:16.364 20:28:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:16.364 20:28:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:16.364 20:28:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:16.364 20:28:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NmQ2ODdkOTYwZDZjMzhmZjg0MDU4ZjNmODc0OWU5YWY5YjA0MzhlNGFkYzY0MGYx/mHpRQ==: 00:36:16.364 20:28:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: 00:36:16.364 20:28:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:16.364 20:28:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:16.364 20:28:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ2ODdkOTYwZDZjMzhmZjg0MDU4ZjNmODc0OWU5YWY5YjA0MzhlNGFkYzY0MGYx/mHpRQ==: 00:36:16.364 20:28:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: ]] 00:36:16.364 20:28:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: 00:36:16.364 20:28:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:36:16.364 20:28:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:16.364 20:28:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:16.364 20:28:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:16.364 20:28:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:16.364 20:28:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:16.364 20:28:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:16.364 20:28:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.364 20:28:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.364 20:28:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.364 20:28:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:16.364 20:28:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:16.364 20:28:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:16.364 20:28:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:16.364 20:28:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:16.364 20:28:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:16.365 20:28:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:16.365 20:28:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:16.365 20:28:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:16.365 20:28:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:16.365 20:28:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:16.365 20:28:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:16.365 20:28:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.365 20:28:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.935 nvme0n1 00:36:16.935 20:28:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.935 20:28:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:36:16.935 20:28:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.935 20:28:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.935 20:28:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.935 20:28:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.935 20:28:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.935 20:28:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.935 20:28:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.935 20:28:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.195 20:28:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.195 20:28:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:17.195 20:28:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:36:17.195 20:28:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.195 20:28:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:17.195 20:28:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:17.195 20:28:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:17.195 20:28:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBlNGFhOTgyMTkxMDNiZWM2NTY4ODc4YmE5MDk2N2M2MjM4M2JhMzQzNmIyZTA1ODczYTA3MTI2NTZhMTNmOHDAc9w=: 00:36:17.195 20:28:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:17.195 20:28:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:17.195 20:28:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:17.195 20:28:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBlNGFhOTgyMTkxMDNiZWM2NTY4ODc4YmE5MDk2N2M2MjM4M2JhMzQzNmIyZTA1ODczYTA3MTI2NTZhMTNmOHDAc9w=: 00:36:17.195 20:28:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:17.195 20:28:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:36:17.195 20:28:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.195 20:28:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:17.195 20:28:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:17.195 20:28:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:17.195 20:28:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.195 20:28:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:17.195 20:28:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.195 20:28:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.195 20:28:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.195 20:28:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:17.195 20:28:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:17.195 20:28:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:17.195 20:28:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:17.195 20:28:09 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.195 20:28:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.195 20:28:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:17.195 20:28:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:17.195 20:28:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:17.195 20:28:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:17.195 20:28:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:17.195 20:28:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:17.195 20:28:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.195 20:28:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.765 nvme0n1 00:36:17.765 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.765 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:17.765 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:17.765 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.765 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.765 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.765 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:17.765 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:17.765 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.765 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.765 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.765 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:17.765 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:17.765 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:17.765 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:36:17.765 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.765 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:17.765 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:17.765 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:17.765 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQzZTk4MmI3MjkwMzBkNjMzZjkwMjYxYWViOTM2NzSnbqGc: 00:36:17.765 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: 00:36:17.765 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:17.765 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:17.765 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTQzZTk4MmI3MjkwMzBkNjMzZjkwMjYxYWViOTM2NzSnbqGc: 00:36:17.765 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: ]] 00:36:17.765 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: 00:36:17.765 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:36:17.765 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.765 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:17.765 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:17.765 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:17.765 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.765 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:17.765 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.765 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.025 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.025 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:18.025 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:18.025 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:18.025 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:18.025 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:18.025 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:18.025 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:18.025 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:18.025 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:18.025 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:18.025 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:18.025 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:18.025 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.025 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.025 nvme0n1 00:36:18.025 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.025 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:18.025 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:18.025 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.026 20:28:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzliMDIzOTcyNDk0ZTUyNWVkM2ZmODRkZWM4NmU2MGJkMmI1YzRjMDEyMDJiYTdiCV2TxQ==: 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzliMDIzOTcyNDk0ZTUyNWVkM2ZmODRkZWM4NmU2MGJkMmI1YzRjMDEyMDJiYTdiCV2TxQ==: 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: ]] 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.026 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.286 nvme0n1 00:36:18.286 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.286 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:18.286 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:18.286 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.286 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.286 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.286 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:18.286 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:18.286 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.286 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.286 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.286 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:18.286 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:36:18.286 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:18.286 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:18.286 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:18.286 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:18.286 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY0NjlkZTViZTBmODFmOThiNDk1NTNjMWU1ZTRiZGGn6oKx: 00:36:18.286 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: 00:36:18.286 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:18.286 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:18.286 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY0NjlkZTViZTBmODFmOThiNDk1NTNjMWU1ZTRiZGGn6oKx: 00:36:18.286 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: ]] 00:36:18.286 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: 00:36:18.286 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:36:18.287 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:18.287 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:18.287 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:18.287 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:18.287 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:18.287 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:18.287 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.287 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.287 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.287 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:18.287 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:18.287 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:18.287 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:18.287 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:18.287 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:18.287 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:18.287 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:18.287 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:18.287 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:18.287 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:18.287 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:18.287 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.287 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.547 nvme0n1 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.547 20:28:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ2ODdkOTYwZDZjMzhmZjg0MDU4ZjNmODc0OWU5YWY5YjA0MzhlNGFkYzY0MGYx/mHpRQ==: 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ2ODdkOTYwZDZjMzhmZjg0MDU4ZjNmODc0OWU5YWY5YjA0MzhlNGFkYzY0MGYx/mHpRQ==: 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: ]] 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:18.547 20:28:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:18.548 20:28:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:18.548 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.548 20:28:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.808 nvme0n1 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBlNGFhOTgyMTkxMDNiZWM2NTY4ODc4YmE5MDk2N2M2MjM4M2JhMzQzNmIyZTA1ODczYTA3MTI2NTZhMTNmOHDAc9w=: 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBlNGFhOTgyMTkxMDNiZWM2NTY4ODc4YmE5MDk2N2M2MjM4M2JhMzQzNmIyZTA1ODczYTA3MTI2NTZhMTNmOHDAc9w=: 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.808 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.069 nvme0n1 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTQzZTk4MmI3MjkwMzBkNjMzZjkwMjYxYWViOTM2NzSnbqGc: 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQzZTk4MmI3MjkwMzBkNjMzZjkwMjYxYWViOTM2NzSnbqGc: 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: ]] 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:19.069 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:19.070 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:19.070 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:19.070 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:19.070 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:19.070 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:19.070 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:19.070 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:19.070 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:19.070 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.070 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.329 nvme0n1 00:36:19.329 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.329 
20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:19.329 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:19.329 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.329 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.329 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.329 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:19.329 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:19.329 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.329 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.329 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.329 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:19.329 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:36:19.329 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:19.329 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:19.329 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:19.329 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:19.330 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzliMDIzOTcyNDk0ZTUyNWVkM2ZmODRkZWM4NmU2MGJkMmI1YzRjMDEyMDJiYTdiCV2TxQ==: 00:36:19.330 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: 00:36:19.330 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:19.330 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:19.330 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzliMDIzOTcyNDk0ZTUyNWVkM2ZmODRkZWM4NmU2MGJkMmI1YzRjMDEyMDJiYTdiCV2TxQ==: 00:36:19.330 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: ]] 00:36:19.330 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: 00:36:19.330 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:36:19.330 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:19.330 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:19.330 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:19.330 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:19.330 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:19.330 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:19.330 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.330 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.330 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.330 20:28:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:19.330 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:19.330 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:19.330 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:19.330 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:19.330 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:19.330 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:19.330 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:19.330 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:19.330 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:19.330 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:19.330 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:19.330 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.330 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.589 nvme0n1 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY0NjlkZTViZTBmODFmOThiNDk1NTNjMWU1ZTRiZGGn6oKx: 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
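The pattern traced here repeats for every DH group and key index: program the key on the target, restrict the host to the digest/dhgroup pair under test, attach the controller with the matching --dhchap-key (and --dhchap-ctrlr-key when a controller key exists), confirm that nvme0 shows up in bdev_nvme_get_controllers, then detach and move on. Condensed into a sketch built only from the rpc_cmd and helper invocations visible in this trace (the keys/ckeys arrays and the helpers come from earlier in host/auth.sh and are assumed here; sha512 is simply the digest this stretch of the run exercises):

    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096; do
        for keyid in "${!keys[@]}"; do
            # target side: hmac(sha512), the dhgroup, and key/ckey for this keyid
            nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
            # host side: only allow the digest/dhgroup pair being tested
            rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
            # controller key is optional; keyid 4 has no ckey, so the array stays empty
            ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
            rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
                -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                --dhchap-key "key${keyid}" "${ckey[@]}"
            # the attach only sticks if DH-HMAC-CHAP authentication succeeded
            [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
            rpc_cmd bdev_nvme_detach_controller nvme0
        done
    done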
00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY0NjlkZTViZTBmODFmOThiNDk1NTNjMWU1ZTRiZGGn6oKx: 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: ]] 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.589 20:28:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.849 nvme0n1 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.849 20:28:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ2ODdkOTYwZDZjMzhmZjg0MDU4ZjNmODc0OWU5YWY5YjA0MzhlNGFkYzY0MGYx/mHpRQ==: 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ2ODdkOTYwZDZjMzhmZjg0MDU4ZjNmODc0OWU5YWY5YjA0MzhlNGFkYzY0MGYx/mHpRQ==: 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: ]] 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
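get_main_ns_ip, traced before every attach, only resolves which address variable applies to the transport in use and prints its value; over tcp that is NVMF_INITIATOR_IP, which is 10.0.0.1 in this run. A hedged reconstruction from the checks visible in the trace (the transport variable name and the failure handling are assumptions, not copied from nvmf/common.sh):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        # TEST_TRANSPORT is an assumed name for whatever holds "tcp" in this run
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}    # name of the variable to dereference
        [[ -z ${!ip} ]] && return 1             # e.g. NVMF_INITIATOR_IP=10.0.0.1
        echo "${!ip}"
    }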
00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:19.849 20:28:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:19.850 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:19.850 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:19.850 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.109 nvme0n1 00:36:20.109 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.109 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:20.109 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:20.109 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.109 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.109 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.109 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:20.109 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:20.109 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.109 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.109 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.109 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:20.110 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:36:20.110 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:20.110 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:20.110 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:20.110 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:20.110 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBlNGFhOTgyMTkxMDNiZWM2NTY4ODc4YmE5MDk2N2M2MjM4M2JhMzQzNmIyZTA1ODczYTA3MTI2NTZhMTNmOHDAc9w=: 00:36:20.110 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:20.110 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:20.110 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:20.110 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBlNGFhOTgyMTkxMDNiZWM2NTY4ODc4YmE5MDk2N2M2MjM4M2JhMzQzNmIyZTA1ODczYTA3MTI2NTZhMTNmOHDAc9w=: 00:36:20.110 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:20.110 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:36:20.110 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:20.110 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:20.110 
20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:20.110 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:20.110 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:20.110 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:20.110 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.110 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.110 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.110 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:20.110 20:28:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:20.110 20:28:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:20.110 20:28:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:20.110 20:28:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:20.110 20:28:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:20.110 20:28:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:20.110 20:28:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:20.110 20:28:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:20.110 20:28:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:20.110 20:28:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:20.110 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:20.110 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.110 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.370 nvme0n1 00:36:20.370 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.370 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:20.370 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:20.370 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.370 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.370 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.370 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:20.370 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:20.370 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.370 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.370 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.370 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:20.370 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:20.370 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:36:20.370 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:20.370 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:20.370 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:20.370 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:20.370 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQzZTk4MmI3MjkwMzBkNjMzZjkwMjYxYWViOTM2NzSnbqGc: 00:36:20.370 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: 00:36:20.370 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:20.370 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:20.370 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQzZTk4MmI3MjkwMzBkNjMzZjkwMjYxYWViOTM2NzSnbqGc: 00:36:20.370 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: ]] 00:36:20.370 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: 00:36:20.370 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:36:20.370 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:20.370 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:20.370 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:20.370 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:20.370 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:20.370 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:20.370 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.370 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.371 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.371 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:20.371 20:28:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:20.371 20:28:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:20.371 20:28:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:20.371 20:28:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:20.371 20:28:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:20.371 20:28:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:20.371 20:28:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:20.371 20:28:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:20.371 20:28:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:20.371 20:28:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:20.371 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:20.371 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.371 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.629 nvme0n1 00:36:20.629 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.629 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:20.629 20:28:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:20.629 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.629 20:28:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.629 20:28:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.629 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:20.629 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:20.629 20:28:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.629 20:28:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.629 20:28:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.629 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:20.629 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:36:20.629 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:20.629 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:20.629 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:20.629 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:20.629 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzliMDIzOTcyNDk0ZTUyNWVkM2ZmODRkZWM4NmU2MGJkMmI1YzRjMDEyMDJiYTdiCV2TxQ==: 00:36:20.629 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: 00:36:20.629 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:20.629 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:20.630 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzliMDIzOTcyNDk0ZTUyNWVkM2ZmODRkZWM4NmU2MGJkMmI1YzRjMDEyMDJiYTdiCV2TxQ==: 00:36:20.630 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: ]] 00:36:20.630 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: 00:36:20.630 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:36:20.630 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:20.630 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:20.630 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:20.630 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:20.630 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:20.630 20:28:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:20.630 20:28:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.630 20:28:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.630 20:28:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.630 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:20.630 20:28:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:20.630 20:28:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:20.630 20:28:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:20.630 20:28:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:20.630 20:28:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:20.630 20:28:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:20.630 20:28:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:20.630 20:28:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:20.630 20:28:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:20.630 20:28:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:20.630 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:20.630 20:28:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.630 20:28:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.889 nvme0n1 00:36:20.889 20:28:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.889 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:20.889 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:20.889 20:28:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.889 20:28:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.889 20:28:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.889 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:20.889 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:20.889 20:28:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.889 20:28:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
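On the target side, nvmet_auth_set_key is what produces the echo lines at host/auth.sh@48 through @51 above: it pushes the hash, DH group and per-keyid secrets into the kernel nvmet configuration before each attach attempt. The xtrace output never shows where those echoes are redirected, so the configfs attribute paths below are assumptions used only to illustrate the shape of the helper:

    nvmet_auth_set_key() {
        local digest dhgroup keyid key ckey
        digest=$1 dhgroup=$2 keyid=$3
        key=${keys[keyid]} ckey=${ckeys[keyid]}
        # host_cfg would be the nvmet configfs directory for nqn.2024-02.io.spdk:host0;
        # both the variable and the attribute names are assumed, not taken from the trace
        echo "hmac(${digest})" > "$host_cfg/dhchap_hash"
        echo "$dhgroup" > "$host_cfg/dhchap_dhgroup"
        echo "$key" > "$host_cfg/dhchap_key"
        [[ -z $ckey ]] || echo "$ckey" > "$host_cfg/dhchap_ctrl_key"
    }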
00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY0NjlkZTViZTBmODFmOThiNDk1NTNjMWU1ZTRiZGGn6oKx: 00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: 00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY0NjlkZTViZTBmODFmOThiNDk1NTNjMWU1ZTRiZGGn6oKx: 00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: ]] 00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: 00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.150 20:28:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.411 nvme0n1 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ2ODdkOTYwZDZjMzhmZjg0MDU4ZjNmODc0OWU5YWY5YjA0MzhlNGFkYzY0MGYx/mHpRQ==: 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ2ODdkOTYwZDZjMzhmZjg0MDU4ZjNmODc0OWU5YWY5YjA0MzhlNGFkYzY0MGYx/mHpRQ==: 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: ]] 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:21.411 20:28:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:21.412 20:28:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:21.412 20:28:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.412 20:28:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.671 nvme0n1 00:36:21.671 20:28:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.671 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:21.671 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBlNGFhOTgyMTkxMDNiZWM2NTY4ODc4YmE5MDk2N2M2MjM4M2JhMzQzNmIyZTA1ODczYTA3MTI2NTZhMTNmOHDAc9w=: 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDBlNGFhOTgyMTkxMDNiZWM2NTY4ODc4YmE5MDk2N2M2MjM4M2JhMzQzNmIyZTA1ODczYTA3MTI2NTZhMTNmOHDAc9w=: 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.672 20:28:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.932 nvme0n1 00:36:21.932 20:28:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.932 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:21.932 20:28:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.932 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:21.932 20:28:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQzZTk4MmI3MjkwMzBkNjMzZjkwMjYxYWViOTM2NzSnbqGc: 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQzZTk4MmI3MjkwMzBkNjMzZjkwMjYxYWViOTM2NzSnbqGc: 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: ]] 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
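The nvmet_auth_set_key calls traced above amount to loading the DH-HMAC-CHAP material for one key index into the kernel nvmet target before each connection attempt. A minimal sketch of that step, assuming the standard nvmet configfs host attributes — the configfs path, attribute names, and helper body are assumptions, only the echoed values at host/auth.sh@48-@51 actually appear in this trace:

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        # Assumed location of the host entry created for nqn.2024-02.io.spdk:host0.
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac($digest)" > "$host/dhchap_hash"      # e.g. hmac(sha512)
        echo "$dhgroup"      > "$host/dhchap_dhgroup"   # e.g. ffdhe6144
        echo "$key"          > "$host/dhchap_key"       # DHHC-1 secret for this keyid
        # A controller (bidirectional) key is written only when one is defined for this keyid.
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
    }

The four echo commands in the trace appear to correspond one-to-one with these writes; the redirections themselves are not visible in xtrace output.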
00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.193 20:28:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.766 nvme0n1 00:36:22.766 20:28:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.766 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:22.766 20:28:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.766 20:28:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:22.766 20:28:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.766 20:28:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzliMDIzOTcyNDk0ZTUyNWVkM2ZmODRkZWM4NmU2MGJkMmI1YzRjMDEyMDJiYTdiCV2TxQ==: 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzliMDIzOTcyNDk0ZTUyNWVkM2ZmODRkZWM4NmU2MGJkMmI1YzRjMDEyMDJiYTdiCV2TxQ==: 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: ]] 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
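Each connect_authenticate invocation that follows (here sha512 with ffdhe6144 and key index 1) exercises the initiator side of the same handshake. Below is a condensed reconstruction of the steps visible in the trace, not the verbatim auth.sh body; in particular, the real script resolves the target address through get_main_ns_ip (which prefers NVMF_FIRST_TARGET_IP for rdma and NVMF_INITIATOR_IP for tcp) rather than hard-coding 10.0.0.1:

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Only pass a controller key when one is defined for this keyid (host/auth.sh@58).
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # Limit the initiator to the digest/dhgroup combination under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Attach with the key under test; 10.0.0.1 is what get_main_ns_ip resolved to here.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"

        # Confirm the authenticated controller exists, then detach for the next combination.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The check on bdev_nvme_get_controllers is what produces the repeated nvme0 == \n\v\m\e\0 comparisons in the log, and the detach returns the host to a clean state before the next digest/dhgroup/keyid combination is tried.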
00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:22.766 20:28:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.026 nvme0n1 00:36:23.026 20:28:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY0NjlkZTViZTBmODFmOThiNDk1NTNjMWU1ZTRiZGGn6oKx: 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY0NjlkZTViZTBmODFmOThiNDk1NTNjMWU1ZTRiZGGn6oKx: 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: ]] 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:23.287 20:28:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.547 nvme0n1 00:36:23.547 20:28:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:23.547 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.547 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.547 20:28:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:23.547 20:28:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.547 20:28:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ2ODdkOTYwZDZjMzhmZjg0MDU4ZjNmODc0OWU5YWY5YjA0MzhlNGFkYzY0MGYx/mHpRQ==: 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ2ODdkOTYwZDZjMzhmZjg0MDU4ZjNmODc0OWU5YWY5YjA0MzhlNGFkYzY0MGYx/mHpRQ==: 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: ]] 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:23.808 20:28:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.069 nvme0n1 00:36:24.069 20:28:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.069 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDBlNGFhOTgyMTkxMDNiZWM2NTY4ODc4YmE5MDk2N2M2MjM4M2JhMzQzNmIyZTA1ODczYTA3MTI2NTZhMTNmOHDAc9w=: 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBlNGFhOTgyMTkxMDNiZWM2NTY4ODc4YmE5MDk2N2M2MjM4M2JhMzQzNmIyZTA1ODczYTA3MTI2NTZhMTNmOHDAc9w=: 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.329 20:28:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.900 nvme0n1 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.900 20:28:17 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTQzZTk4MmI3MjkwMzBkNjMzZjkwMjYxYWViOTM2NzSnbqGc: 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTQzZTk4MmI3MjkwMzBkNjMzZjkwMjYxYWViOTM2NzSnbqGc: 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: ]] 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTEwNTIwYjkyNjUwZTViMmY3ZDc3YTA3MTAyNDMwZjJkOWJiYTI1MjBiNDZhYzU5ZDY5ODJhNjZjODY0Mjk3YbzVIas=: 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.900 20:28:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.471 nvme0n1 00:36:25.471 20:28:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:25.471 20:28:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:25.471 20:28:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:25.471 20:28:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:25.471 20:28:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.471 20:28:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzliMDIzOTcyNDk0ZTUyNWVkM2ZmODRkZWM4NmU2MGJkMmI1YzRjMDEyMDJiYTdiCV2TxQ==: 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzliMDIzOTcyNDk0ZTUyNWVkM2ZmODRkZWM4NmU2MGJkMmI1YzRjMDEyMDJiYTdiCV2TxQ==: 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: ]] 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:25.732 20:28:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:25.733 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:25.733 20:28:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:25.733 20:28:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.307 nvme0n1 00:36:26.307 20:28:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:26.307 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:26.307 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:26.307 20:28:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:26.307 20:28:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.307 20:28:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:26.307 20:28:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:26.307 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:26.307 20:28:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:26.307 20:28:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY0NjlkZTViZTBmODFmOThiNDk1NTNjMWU1ZTRiZGGn6oKx: 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY0NjlkZTViZTBmODFmOThiNDk1NTNjMWU1ZTRiZGGn6oKx: 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: ]] 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjM3YmNlZjljMDQyM2Y1ZDI0YmFlYWQyNGJjMjZkOTgHN7zj: 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:26.568 20:28:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.139 nvme0n1 00:36:27.139 20:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.139 20:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.139 20:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.139 20:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.139 20:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.139 20:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.139 20:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.139 20:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.139 20:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.139 20:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.400 20:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.400 20:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.400 20:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:36:27.400 20:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.400 20:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:27.400 20:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:27.400 20:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:27.400 20:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmQ2ODdkOTYwZDZjMzhmZjg0MDU4ZjNmODc0OWU5YWY5YjA0MzhlNGFkYzY0MGYx/mHpRQ==: 00:36:27.400 20:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: 00:36:27.400 20:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:27.400 20:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:27.400 20:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmQ2ODdkOTYwZDZjMzhmZjg0MDU4ZjNmODc0OWU5YWY5YjA0MzhlNGFkYzY0MGYx/mHpRQ==: 00:36:27.400 20:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: ]] 00:36:27.400 20:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzQyYzQyMWIxOGU4OWRkM2VhZGRhMTAwNDA3MTU4NTM8FqpS: 00:36:27.400 20:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:36:27.400 20:28:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.400 20:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:27.400 20:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:27.400 20:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:27.400 20:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.400 20:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:27.400 20:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.400 20:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.400 20:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.400 20:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.400 20:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:27.400 20:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:27.400 20:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:27.400 20:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.400 20:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.400 20:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:27.400 20:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.400 20:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:27.400 20:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:27.400 20:28:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:27.400 20:28:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:27.400 20:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.400 20:28:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.971 nvme0n1 00:36:27.971 20:28:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.971 20:28:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.971 20:28:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.971 20:28:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.971 20:28:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.971 20:28:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.971 20:28:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.971 20:28:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.971 20:28:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.971 20:28:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.232 20:28:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.232 20:28:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:36:28.232 20:28:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:36:28.232 20:28:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:28.232 20:28:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:28.232 20:28:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:28.232 20:28:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:28.232 20:28:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDBlNGFhOTgyMTkxMDNiZWM2NTY4ODc4YmE5MDk2N2M2MjM4M2JhMzQzNmIyZTA1ODczYTA3MTI2NTZhMTNmOHDAc9w=: 00:36:28.232 20:28:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:28.232 20:28:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:28.232 20:28:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:28.232 20:28:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDBlNGFhOTgyMTkxMDNiZWM2NTY4ODc4YmE5MDk2N2M2MjM4M2JhMzQzNmIyZTA1ODczYTA3MTI2NTZhMTNmOHDAc9w=: 00:36:28.232 20:28:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:28.232 20:28:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:36:28.232 20:28:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:28.232 20:28:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:28.232 20:28:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:28.232 20:28:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:28.232 20:28:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:28.232 20:28:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:28.232 20:28:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.232 20:28:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.232 20:28:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.232 20:28:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:28.232 20:28:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:28.232 20:28:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:28.232 20:28:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:28.232 20:28:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.232 20:28:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.232 20:28:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:28.232 20:28:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:28.232 20:28:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:28.232 20:28:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:28.232 20:28:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:28.232 20:28:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:28.232 20:28:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:36:28.232 20:28:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.803 nvme0n1 00:36:28.803 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.803 20:28:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:28.803 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.803 20:28:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:28.803 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.803 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:28.803 20:28:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:28.803 20:28:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:28.803 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:28.803 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.064 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.064 20:28:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:29.064 20:28:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:29.064 20:28:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:29.064 20:28:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:29.064 20:28:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:29.064 20:28:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzliMDIzOTcyNDk0ZTUyNWVkM2ZmODRkZWM4NmU2MGJkMmI1YzRjMDEyMDJiYTdiCV2TxQ==: 00:36:29.064 20:28:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: 00:36:29.064 20:28:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:29.064 20:28:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:29.064 20:28:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzliMDIzOTcyNDk0ZTUyNWVkM2ZmODRkZWM4NmU2MGJkMmI1YzRjMDEyMDJiYTdiCV2TxQ==: 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: ]] 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk5M2QwMWM2YTE1MjhjMjY3NTcxNzFkOWEwYmIzMWEwMDhiYWYyYzIyOTgyMGNhfpzQHA==: 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.065 
20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.065 request: 00:36:29.065 { 00:36:29.065 "name": "nvme0", 00:36:29.065 "trtype": "tcp", 00:36:29.065 "traddr": "10.0.0.1", 00:36:29.065 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:29.065 "adrfam": "ipv4", 00:36:29.065 "trsvcid": "4420", 00:36:29.065 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:29.065 "method": "bdev_nvme_attach_controller", 00:36:29.065 "req_id": 1 00:36:29.065 } 00:36:29.065 Got JSON-RPC error response 00:36:29.065 response: 00:36:29.065 { 00:36:29.065 "code": -32602, 00:36:29.065 "message": "Invalid parameters" 00:36:29.065 } 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:36:29.065 
20:28:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.065 request: 00:36:29.065 { 00:36:29.065 "name": "nvme0", 00:36:29.065 "trtype": "tcp", 00:36:29.065 "traddr": "10.0.0.1", 00:36:29.065 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:29.065 "adrfam": "ipv4", 00:36:29.065 "trsvcid": "4420", 00:36:29.065 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:29.065 "dhchap_key": "key2", 00:36:29.065 "method": "bdev_nvme_attach_controller", 00:36:29.065 "req_id": 1 00:36:29.065 } 00:36:29.065 Got JSON-RPC error response 00:36:29.065 response: 00:36:29.065 { 00:36:29.065 "code": -32602, 00:36:29.065 "message": "Invalid parameters" 00:36:29.065 } 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.065 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:29.325 20:28:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:36:29.325 20:28:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:36:29.325 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:29.325 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:29.325 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:29.325 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.325 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:29.325 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:29.325 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.325 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:29.325 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:29.325 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:29.325 20:28:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:29.325 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:36:29.326 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:29.326 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:29.326 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:29.326 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:29.326 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:29.326 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:29.326 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.326 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.326 request: 00:36:29.326 { 00:36:29.326 "name": "nvme0", 00:36:29.326 "trtype": "tcp", 00:36:29.326 "traddr": "10.0.0.1", 00:36:29.326 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:29.326 "adrfam": "ipv4", 00:36:29.326 "trsvcid": "4420", 00:36:29.326 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:29.326 "dhchap_key": "key1", 00:36:29.326 "dhchap_ctrlr_key": "ckey2", 00:36:29.326 "method": "bdev_nvme_attach_controller", 00:36:29.326 
"req_id": 1 00:36:29.326 } 00:36:29.326 Got JSON-RPC error response 00:36:29.326 response: 00:36:29.326 { 00:36:29.326 "code": -32602, 00:36:29.326 "message": "Invalid parameters" 00:36:29.326 } 00:36:29.326 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:29.326 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:36:29.326 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:29.326 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:29.326 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:29.326 20:28:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:36:29.326 20:28:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:36:29.326 20:28:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:36:29.326 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:29.326 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:36:29.326 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:29.326 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:36:29.326 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:29.326 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:29.326 rmmod nvme_tcp 00:36:29.326 rmmod nvme_fabrics 00:36:29.326 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:29.326 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:36:29.326 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:36:29.326 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 289381 ']' 00:36:29.326 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 289381 00:36:29.326 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 289381 ']' 00:36:29.326 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 289381 00:36:29.326 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:36:29.326 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:29.326 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 289381 00:36:29.326 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:29.326 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:29.326 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 289381' 00:36:29.326 killing process with pid 289381 00:36:29.326 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 289381 00:36:29.326 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 289381 00:36:29.586 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:29.586 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:29.586 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:29.586 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:29.586 20:28:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:29.586 20:28:21 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:29.586 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:29.586 20:28:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:31.498 20:28:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:31.498 20:28:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:31.498 20:28:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:31.498 20:28:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:36:31.498 20:28:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:36:31.498 20:28:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:36:31.498 20:28:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:31.498 20:28:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:31.758 20:28:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:31.758 20:28:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:31.758 20:28:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:31.759 20:28:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:31.759 20:28:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:35.967 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:35.967 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:35.967 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:35.967 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:35.967 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:35.967 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:35.967 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:35.967 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:35.967 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:35.967 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:35.967 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:35.967 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:35.967 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:35.967 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:35.967 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:35.967 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:35.967 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:36:36.228 20:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.KYs /tmp/spdk.key-null.hPp /tmp/spdk.key-sha256.XeQ /tmp/spdk.key-sha384.13V /tmp/spdk.key-sha512.v8N /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:36:36.228 20:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:40.435 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:40.435 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:40.435 0000:80:01.4 (8086 0b00): Already using the 
vfio-pci driver 00:36:40.435 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:40.435 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:40.435 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:40.435 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:40.435 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:40.435 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:40.435 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:36:40.435 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:40.435 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:40.435 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:40.435 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:40.435 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:40.435 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:40.435 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:40.435 00:36:40.435 real 1m1.005s 00:36:40.435 user 0m53.233s 00:36:40.435 sys 0m16.920s 00:36:40.435 20:28:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:40.435 20:28:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.435 ************************************ 00:36:40.435 END TEST nvmf_auth_host 00:36:40.435 ************************************ 00:36:40.435 20:28:32 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:36:40.435 20:28:32 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:40.435 20:28:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:36:40.435 20:28:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:40.435 20:28:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:40.435 ************************************ 00:36:40.435 START TEST nvmf_digest 00:36:40.435 ************************************ 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:40.435 * Looking for test storage... 
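The tail of the nvmf_auth_host run above is a series of negative checks: each bdev_nvme_attach_controller call is wrapped in NOT and is expected to fail with JSON-RPC error -32602 (Invalid parameters), since the kernel target has a DHCHAP key configured for keyid 1 while the host offers no key, only key2, or a mismatched key1/ckey2 pair. Cleanup then tears the kernel nvmet target down through configfs before setup.sh rebinds the devices. A rough reconstruction of that teardown, using the paths from the trace (the bare 'echo 0' is assumed to write the namespace enable attribute):

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    rm "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"
    rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 0 > "$subsys/namespaces/1/enable"       # assumed target of the traced 'echo 0'
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir "$subsys/namespaces/1"
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet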
00:36:40.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:36:40.435 20:28:32 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:36:40.436 20:28:32 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:36:40.436 20:28:32 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:36:40.436 20:28:32 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:36:40.436 20:28:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:40.436 20:28:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:40.436 20:28:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:40.436 20:28:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:40.436 20:28:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:40.436 20:28:32 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:40.436 20:28:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:40.436 20:28:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:40.436 20:28:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:40.436 20:28:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:40.436 20:28:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:36:40.436 20:28:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:48.640 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:48.640 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:36:48.640 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:48.640 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:48.640 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:48.640 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:48.640 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:48.641 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:48.641 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:48.641 Found net devices under 0000:31:00.0: cvl_0_0 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:48.641 Found net devices under 0000:31:00.1: cvl_0_1 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:48.641 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:48.641 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:36:48.641 00:36:48.641 --- 10.0.0.2 ping statistics --- 00:36:48.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:48.641 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:48.641 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:48.641 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.372 ms 00:36:48.641 00:36:48.641 --- 10.0.0.1 ping statistics --- 00:36:48.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:48.641 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:48.641 20:28:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:48.641 20:28:41 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:36:48.641 20:28:41 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:36:48.641 20:28:41 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:36:48.641 20:28:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:48.641 20:28:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:48.641 20:28:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:48.641 ************************************ 00:36:48.641 START TEST nvmf_digest_clean 00:36:48.641 ************************************ 00:36:48.641 20:28:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:36:48.641 20:28:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:36:48.641 20:28:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:36:48.641 20:28:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:36:48.641 20:28:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:36:48.642 20:28:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:36:48.642 20:28:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:48.642 20:28:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:48.642 20:28:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:48.642 20:28:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=307051 00:36:48.642 20:28:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 307051 00:36:48.642 20:28:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 307051 ']' 00:36:48.642 20:28:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:48.642 20:28:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:48.642 
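Because this is a phy run, nvmf_tcp_init does not use veth pairs: it flushes both ports of the E810 NIC, moves cvl_0_0 into a private network namespace as the target interface (10.0.0.2) and keeps cvl_0_1 in the root namespace as the initiator (10.0.0.1), so the NVMe/TCP traffic genuinely crosses the link between the two ports (presumably cabled back to back). Condensed from the commands traced above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator, root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target, inside the netns
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # both directions answer above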
20:28:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:48.642 20:28:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:48.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:48.642 20:28:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:48.642 20:28:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:48.903 [2024-05-15 20:28:41.144486] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:36:48.903 [2024-05-15 20:28:41.144543] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:48.903 EAL: No free 2048 kB hugepages reported on node 1 00:36:48.903 [2024-05-15 20:28:41.233005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:48.903 [2024-05-15 20:28:41.326338] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:48.903 [2024-05-15 20:28:41.326399] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:48.903 [2024-05-15 20:28:41.326407] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:48.903 [2024-05-15 20:28:41.326415] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:48.903 [2024-05-15 20:28:41.326421] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
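The target application is launched inside the namespace with --wait-for-rpc, which holds it before subsystem initialization; digest.sh presumably does this so that acceleration (DSA) could be configured first when a dsa_target is requested, which it is not here. The rpc_cmd batch at host/digest.sh@43 then resumes it and creates the null0 bdev and the 10.0.0.2:4420 listener shown below. A minimal sketch of resuming such a paused app, with the socket path taken from the trace and the script path abbreviated:

    # resume an SPDK app that was started with --wait-for-rpc
    scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init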
00:36:48.903 [2024-05-15 20:28:41.326449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:49.847 20:28:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:49.847 20:28:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:36:49.847 20:28:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:49.847 20:28:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:49.847 20:28:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:49.847 20:28:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:49.847 20:28:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:36:49.847 20:28:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:36:49.847 20:28:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:36:49.847 20:28:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:49.847 20:28:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:49.847 null0 00:36:49.847 [2024-05-15 20:28:42.163889] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:49.847 [2024-05-15 20:28:42.187871] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:36:49.847 [2024-05-15 20:28:42.188164] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:49.847 20:28:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:49.847 20:28:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:36:49.847 20:28:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:49.847 20:28:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:49.847 20:28:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:49.847 20:28:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:49.847 20:28:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:49.847 20:28:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:49.847 20:28:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=307314 00:36:49.847 20:28:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 307314 /var/tmp/bperf.sock 00:36:49.847 20:28:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 307314 ']' 00:36:49.847 20:28:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:49.847 20:28:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:49.847 20:28:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
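run_bperf drives each workload from a separate bdevperf process ("bperf"), started with -z/--wait-for-rpc and its own RPC socket so it does not collide with the target's /var/tmp/spdk.sock. The bperf_rpc and bperf_py helpers used throughout simply point the stock tools at that socket; roughly (paths abbreviated):

    # sketch of the helpers, with the socket path from the trace
    bperf_rpc() { scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
    bperf_py()  { examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock "$@"; }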
00:36:49.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:49.848 20:28:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:49.848 20:28:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:49.848 20:28:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:49.848 [2024-05-15 20:28:42.239894] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:36:49.848 [2024-05-15 20:28:42.239957] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid307314 ] 00:36:49.848 EAL: No free 2048 kB hugepages reported on node 1 00:36:49.848 [2024-05-15 20:28:42.310947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:50.108 [2024-05-15 20:28:42.384335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:50.108 20:28:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:50.108 20:28:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:36:50.108 20:28:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:50.108 20:28:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:50.108 20:28:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:50.368 20:28:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:50.368 20:28:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:50.628 nvme0n1 00:36:50.628 20:28:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:50.628 20:28:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:50.628 Running I/O for 2 seconds... 
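The controller is attached from inside bdevperf with --ddgst, enabling the NVMe/TCP data digest, so every data PDU of the workload carries a CRC32C checksum; those CRC32C operations are what the accel statistics checked after the run are expected to count. The two calls that set up and kick off the run, as traced (paths abbreviated):

    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests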
00:36:53.170 00:36:53.170 Latency(us) 00:36:53.170 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:53.170 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:53.170 nvme0n1 : 2.01 20542.87 80.25 0.00 0.00 6220.49 2962.77 19333.12 00:36:53.170 =================================================================================================================== 00:36:53.170 Total : 20542.87 80.25 0.00 0.00 6220.49 2962.77 19333.12 00:36:53.170 0 00:36:53.170 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:53.170 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:53.170 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:53.170 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:53.170 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:53.170 | select(.opcode=="crc32c") 00:36:53.170 | "\(.module_name) \(.executed)"' 00:36:53.170 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:53.170 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:53.170 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:53.170 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:53.170 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 307314 00:36:53.170 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 307314 ']' 00:36:53.170 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 307314 00:36:53.170 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:36:53.170 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:53.170 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 307314 00:36:53.170 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:36:53.170 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:36:53.170 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 307314' 00:36:53.170 killing process with pid 307314 00:36:53.170 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 307314 00:36:53.170 Received shutdown signal, test time was about 2.000000 seconds 00:36:53.170 00:36:53.170 Latency(us) 00:36:53.170 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:53.170 =================================================================================================================== 00:36:53.170 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:53.170 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 307314 00:36:53.170 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:36:53.170 20:28:45 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:53.170 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:53.170 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:53.170 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:53.170 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:53.170 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:53.170 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=307852 00:36:53.170 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 307852 /var/tmp/bperf.sock 00:36:53.170 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 307852 ']' 00:36:53.170 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:53.170 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:53.170 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:53.170 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:53.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:53.170 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:53.170 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:53.170 [2024-05-15 20:28:45.510889] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:36:53.170 [2024-05-15 20:28:45.510948] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid307852 ] 00:36:53.170 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:53.170 Zero copy mechanism will not be used. 
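After each run the test reads the accel framework statistics back over the bperf socket and, with the jq filter shown after the first summary table above, checks that the crc32c opcode was executed a non-zero number of times by the expected module (software here, since every run passes scan_dsa=false):

    scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # expected output shape for this configuration: "software <non-zero count>"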
00:36:53.170 EAL: No free 2048 kB hugepages reported on node 1 00:36:53.171 [2024-05-15 20:28:45.576471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:53.171 [2024-05-15 20:28:45.640993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:53.431 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:53.431 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:36:53.431 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:53.431 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:53.431 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:53.691 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:53.691 20:28:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:53.951 nvme0n1 00:36:53.951 20:28:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:53.951 20:28:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:53.951 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:53.951 Zero copy mechanism will not be used. 00:36:53.951 Running I/O for 2 seconds... 
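The per-run summary tables are internally consistent and worth a quick sanity check; for the first 4 KiB random-read run reported above:

    20542.87 IOPS x 4096 B ≈ 84.1 MB/s ≈ 80.25 MiB/s        (matches the MiB/s column)
    128 (queue depth) / 6220.49 us ≈ 20578 IOPS              (Little's law, close to the reported 20542.87)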
00:36:56.493 00:36:56.493 Latency(us) 00:36:56.493 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:56.493 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:56.493 nvme0n1 : 2.00 2392.86 299.11 0.00 0.00 6682.66 1529.17 10158.08 00:36:56.493 =================================================================================================================== 00:36:56.493 Total : 2392.86 299.11 0.00 0.00 6682.66 1529.17 10158.08 00:36:56.493 0 00:36:56.493 20:28:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:56.493 20:28:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:56.493 20:28:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:56.493 20:28:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:56.493 20:28:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:56.493 | select(.opcode=="crc32c") 00:36:56.493 | "\(.module_name) \(.executed)"' 00:36:56.493 20:28:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:56.493 20:28:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:56.493 20:28:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:56.493 20:28:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:56.493 20:28:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 307852 00:36:56.493 20:28:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 307852 ']' 00:36:56.493 20:28:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 307852 00:36:56.493 20:28:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:36:56.493 20:28:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:56.493 20:28:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 307852 00:36:56.493 20:28:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:36:56.493 20:28:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:36:56.493 20:28:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 307852' 00:36:56.493 killing process with pid 307852 00:36:56.493 20:28:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 307852 00:36:56.493 Received shutdown signal, test time was about 2.000000 seconds 00:36:56.493 00:36:56.493 Latency(us) 00:36:56.493 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:56.493 =================================================================================================================== 00:36:56.493 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:56.493 20:28:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 307852 00:36:56.493 20:28:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:36:56.493 20:28:48 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:56.493 20:28:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:56.493 20:28:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:56.493 20:28:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:56.493 20:28:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:56.493 20:28:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:56.493 20:28:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=308454 00:36:56.493 20:28:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 308454 /var/tmp/bperf.sock 00:36:56.493 20:28:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 308454 ']' 00:36:56.493 20:28:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:56.493 20:28:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:56.493 20:28:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:56.493 20:28:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:56.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:56.493 20:28:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:56.493 20:28:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:56.493 [2024-05-15 20:28:48.917353] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
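run_bperf launches a fresh bdevperf here: -m 2 puts it on core 1 (Core Mask 0x2 in the job line), -z makes it idle until a perform_tests RPC arrives, and --wait-for-rpc defers framework init (in these runs only framework_start_init follows, since scan_dsa=false); waitforlisten then blocks until /var/tmp/bperf.sock answers. A rough stand-alone equivalent, where the polling loop only approximates what waitforlisten in autotest_common.sh does:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # same invocation as in the trace above: randwrite, 4 KiB I/Os, queue depth 128, 2 s runs
  $SPDK_DIR/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  bperfpid=$!
  # poll the RPC socket until it responds (approximation of waitforlisten)
  until $SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done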
00:36:56.493 [2024-05-15 20:28:48.917407] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid308454 ] 00:36:56.493 EAL: No free 2048 kB hugepages reported on node 1 00:36:56.493 [2024-05-15 20:28:48.982901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:56.754 [2024-05-15 20:28:49.045982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:56.754 20:28:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:56.754 20:28:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:36:56.754 20:28:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:56.754 20:28:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:56.754 20:28:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:57.014 20:28:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:57.014 20:28:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:57.274 nvme0n1 00:36:57.274 20:28:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:57.275 20:28:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:57.275 Running I/O for 2 seconds... 
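Each timed run is followed by the same verification (it appears once above and again after the table below): accel_get_stats is read from the bperf RPC socket, the crc32c entry is picked out with jq, and the test asserts that crc32c executed at least once and in the expected module, which is software because scan_dsa=false. Reduced to a self-contained snippet with the same RPC call and jq filter as in the trace:

  read -r acc_module acc_executed < <(
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  (( acc_executed > 0 ))            # crc32c really ran
  [[ $acc_module == software ]]     # and in the expected (software) module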
00:36:59.818 00:36:59.818 Latency(us) 00:36:59.818 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:59.818 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:59.818 nvme0n1 : 2.01 20868.45 81.52 0.00 0.00 6120.23 5160.96 15182.51 00:36:59.819 =================================================================================================================== 00:36:59.819 Total : 20868.45 81.52 0.00 0.00 6120.23 5160.96 15182.51 00:36:59.819 0 00:36:59.819 20:28:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:59.819 20:28:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:59.819 20:28:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:59.819 20:28:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:59.819 | select(.opcode=="crc32c") 00:36:59.819 | "\(.module_name) \(.executed)"' 00:36:59.819 20:28:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:59.819 20:28:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:59.819 20:28:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:59.819 20:28:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:59.819 20:28:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:59.819 20:28:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 308454 00:36:59.819 20:28:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 308454 ']' 00:36:59.819 20:28:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 308454 00:36:59.819 20:28:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:36:59.819 20:28:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:59.819 20:28:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 308454 00:36:59.819 20:28:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:36:59.819 20:28:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:36:59.819 20:28:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 308454' 00:36:59.819 killing process with pid 308454 00:36:59.819 20:28:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 308454 00:36:59.819 Received shutdown signal, test time was about 2.000000 seconds 00:36:59.819 00:36:59.819 Latency(us) 00:36:59.819 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:59.819 =================================================================================================================== 00:36:59.819 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:59.819 20:28:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 308454 00:36:59.819 20:28:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:36:59.819 20:28:52 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:59.819 20:28:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:59.819 20:28:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:59.819 20:28:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:59.819 20:28:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:59.819 20:28:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:59.819 20:28:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=309112 00:36:59.819 20:28:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 309112 /var/tmp/bperf.sock 00:36:59.819 20:28:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 309112 ']' 00:36:59.819 20:28:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:59.819 20:28:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:59.819 20:28:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:59.819 20:28:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:59.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:59.819 20:28:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:59.819 20:28:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:59.819 [2024-05-15 20:28:52.203094] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:36:59.819 [2024-05-15 20:28:52.203147] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid309112 ] 00:36:59.819 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:59.819 Zero copy mechanism will not be used. 
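Every bperf instance above is torn down the same way once its stats have been checked: killprocess verifies the pid is still alive and is not a bare sudo wrapper, logs it, and sends the signal; the all-zero 'Received shutdown signal' table is bdevperf acknowledging that signal. A trimmed restatement of the helper, keeping only the Linux path taken in this job (the real one in autotest_common.sh guards a few more cases):

  killprocess() {
      local pid=$1
      kill -0 "$pid"                                      # still running?
      [[ $(ps --no-headers -o comm= "$pid") != sudo ]]    # refuse to kill a bare sudo
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                         # reap it and pick up its exit status
  }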
00:36:59.819 EAL: No free 2048 kB hugepages reported on node 1 00:36:59.819 [2024-05-15 20:28:52.268611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:00.080 [2024-05-15 20:28:52.331920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:00.080 20:28:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:00.080 20:28:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:37:00.080 20:28:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:00.080 20:28:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:00.080 20:28:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:00.342 20:28:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:00.342 20:28:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:00.611 nvme0n1 00:37:00.611 20:28:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:00.611 20:28:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:00.611 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:00.611 Zero copy mechanism will not be used. 00:37:00.611 Running I/O for 2 seconds... 
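The summary that follows each run has a fixed shape: a per-job row and a Total row carrying IOPS, MiB/s, Fail/s, TO/s and average/min/max latency in microseconds. If bdevperf's output were saved to a file (bperf.log here is only an illustration, this job does not write one), the aggregate numbers could be pulled out with something like:

  # $3 = IOPS, $4 = MiB/s on the 'Total :' row; the all-zero row printed at shutdown matches too
  awk '/Total[[:space:]]*:/ {print "IOPS=" $3, "MiB/s=" $4}' bperf.log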
00:37:03.157 00:37:03.157 Latency(us) 00:37:03.157 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:03.157 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:37:03.157 nvme0n1 : 2.00 3404.05 425.51 0.00 0.00 4691.23 2921.81 18786.99 00:37:03.157 =================================================================================================================== 00:37:03.157 Total : 3404.05 425.51 0.00 0.00 4691.23 2921.81 18786.99 00:37:03.157 0 00:37:03.157 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:03.157 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:03.157 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:03.157 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:03.157 | select(.opcode=="crc32c") 00:37:03.157 | "\(.module_name) \(.executed)"' 00:37:03.157 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:03.157 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:03.157 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:03.157 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:03.157 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:03.157 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 309112 00:37:03.157 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 309112 ']' 00:37:03.157 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 309112 00:37:03.157 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:37:03.157 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:03.157 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 309112 00:37:03.157 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:03.157 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:03.157 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 309112' 00:37:03.157 killing process with pid 309112 00:37:03.157 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 309112 00:37:03.157 Received shutdown signal, test time was about 2.000000 seconds 00:37:03.157 00:37:03.157 Latency(us) 00:37:03.157 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:03.157 =================================================================================================================== 00:37:03.157 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:03.157 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 309112 00:37:03.157 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 307051 00:37:03.157 20:28:55 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 307051 ']' 00:37:03.157 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 307051 00:37:03.157 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:37:03.157 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:03.157 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 307051 00:37:03.157 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:03.157 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:03.157 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 307051' 00:37:03.157 killing process with pid 307051 00:37:03.157 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 307051 00:37:03.157 [2024-05-15 20:28:55.501443] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:37:03.157 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 307051 00:37:03.157 00:37:03.157 real 0m14.560s 00:37:03.157 user 0m28.612s 00:37:03.157 sys 0m3.307s 00:37:03.158 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:03.158 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:03.158 ************************************ 00:37:03.158 END TEST nvmf_digest_clean 00:37:03.158 ************************************ 00:37:03.419 20:28:55 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:37:03.419 20:28:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:03.419 20:28:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:03.419 20:28:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:03.419 ************************************ 00:37:03.419 START TEST nvmf_digest_error 00:37:03.419 ************************************ 00:37:03.419 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:37:03.419 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:37:03.419 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:03.419 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:37:03.419 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:03.419 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=309819 00:37:03.419 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 309819 00:37:03.419 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 309819 ']' 00:37:03.419 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:03.419 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:37:03.419 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:03.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:03.419 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:03.419 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:03.419 20:28:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:37:03.419 [2024-05-15 20:28:55.765238] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:37:03.419 [2024-05-15 20:28:55.765285] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:03.419 EAL: No free 2048 kB hugepages reported on node 1 00:37:03.419 [2024-05-15 20:28:55.854751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:03.419 [2024-05-15 20:28:55.919096] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:03.419 [2024-05-15 20:28:55.919128] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:03.419 [2024-05-15 20:28:55.919136] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:03.419 [2024-05-15 20:28:55.919142] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:03.419 [2024-05-15 20:28:55.919147] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
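For the error-path test the target itself is started with --wait-for-rpc so that crc32c can be re-routed to the accel error-injection module before initialization; the accel_assign_opc call and its NOTICE appear just below, and common_target_config then builds the null0 bdev, the TCP transport and the listener on 10.0.0.2:4420. The individual RPCs behind common_target_config are not expanded in this trace, so the following is only a hand-rolled approximation of the bring-up (the null bdev size/block size and the -a flag are assumptions; names and addresses are taken from the log):

  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC accel_assign_opc -o crc32c -m error          # crc32c now goes through the error-injection module
  $RPC framework_start_init
  $RPC bdev_null_create null0 100 4096              # backing bdev named null0 (size/bs assumed)
  $RPC nvmf_create_transport -t tcp
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

On the host side, further down in the trace, bperf is configured with bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 and accel_error_inject_error -o crc32c -t corrupt -i 256 then starts corrupting crc32c results, which is what produces the long run of 'data digest error' and 'COMMAND TRANSIENT TRANSPORT ERROR' completions that follows.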
00:37:03.419 [2024-05-15 20:28:55.919164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:04.361 20:28:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:04.361 20:28:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:37:04.361 20:28:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:04.361 20:28:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:04.361 20:28:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:04.361 20:28:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:04.361 20:28:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:37:04.361 20:28:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:04.361 20:28:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:04.361 [2024-05-15 20:28:56.661263] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:37:04.361 20:28:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:04.361 20:28:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:37:04.361 20:28:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:37:04.361 20:28:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:04.361 20:28:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:04.361 null0 00:37:04.361 [2024-05-15 20:28:56.737956] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:04.361 [2024-05-15 20:28:56.761952] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:37:04.361 [2024-05-15 20:28:56.762173] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:04.361 20:28:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:04.361 20:28:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:37:04.361 20:28:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:04.361 20:28:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:37:04.361 20:28:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:37:04.361 20:28:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:37:04.361 20:28:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=310162 00:37:04.361 20:28:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 310162 /var/tmp/bperf.sock 00:37:04.361 20:28:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 310162 ']' 00:37:04.361 20:28:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:04.361 20:28:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:37:04.361 20:28:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:04.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:04.361 20:28:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:04.361 20:28:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:04.361 20:28:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:37:04.361 [2024-05-15 20:28:56.812550] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:37:04.361 [2024-05-15 20:28:56.812597] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid310162 ] 00:37:04.361 EAL: No free 2048 kB hugepages reported on node 1 00:37:04.622 [2024-05-15 20:28:56.876281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:04.622 [2024-05-15 20:28:56.940299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:04.622 20:28:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:04.622 20:28:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:37:04.622 20:28:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:04.622 20:28:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:04.882 20:28:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:04.882 20:28:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:04.882 20:28:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:04.882 20:28:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:04.882 20:28:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:04.882 20:28:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:05.142 nvme0n1 00:37:05.142 20:28:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:37:05.142 20:28:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:05.142 20:28:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:05.142 20:28:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:05.142 20:28:57 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:05.142 20:28:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:05.142 Running I/O for 2 seconds... 00:37:05.142 [2024-05-15 20:28:57.639188] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.142 [2024-05-15 20:28:57.639222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.142 [2024-05-15 20:28:57.639234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.403 [2024-05-15 20:28:57.653180] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.403 [2024-05-15 20:28:57.653204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.403 [2024-05-15 20:28:57.653213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.403 [2024-05-15 20:28:57.664935] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.403 [2024-05-15 20:28:57.664956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.403 [2024-05-15 20:28:57.664965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.403 [2024-05-15 20:28:57.678403] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.403 [2024-05-15 20:28:57.678424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.403 [2024-05-15 20:28:57.678434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.403 [2024-05-15 20:28:57.690332] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.403 [2024-05-15 20:28:57.690353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.403 [2024-05-15 20:28:57.690363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.403 [2024-05-15 20:28:57.701859] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.403 [2024-05-15 20:28:57.701879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.403 [2024-05-15 20:28:57.701888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.403 [2024-05-15 20:28:57.714708] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x1358c00) 00:37:05.403 [2024-05-15 20:28:57.714729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.403 [2024-05-15 20:28:57.714738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.403 [2024-05-15 20:28:57.726510] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.403 [2024-05-15 20:28:57.726531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.403 [2024-05-15 20:28:57.726542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.403 [2024-05-15 20:28:57.738282] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.403 [2024-05-15 20:28:57.738303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.403 [2024-05-15 20:28:57.738311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.403 [2024-05-15 20:28:57.751171] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.403 [2024-05-15 20:28:57.751191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.403 [2024-05-15 20:28:57.751200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.403 [2024-05-15 20:28:57.763480] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.403 [2024-05-15 20:28:57.763501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.403 [2024-05-15 20:28:57.763509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.403 [2024-05-15 20:28:57.775998] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.403 [2024-05-15 20:28:57.776019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.403 [2024-05-15 20:28:57.776027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.403 [2024-05-15 20:28:57.788793] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.403 [2024-05-15 20:28:57.788814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.403 [2024-05-15 20:28:57.788822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.403 [2024-05-15 20:28:57.800958] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.403 [2024-05-15 20:28:57.800979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.403 [2024-05-15 20:28:57.800987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.403 [2024-05-15 20:28:57.812937] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.403 [2024-05-15 20:28:57.812963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.403 [2024-05-15 20:28:57.812972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.403 [2024-05-15 20:28:57.825007] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.403 [2024-05-15 20:28:57.825028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.403 [2024-05-15 20:28:57.825036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.403 [2024-05-15 20:28:57.836979] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.403 [2024-05-15 20:28:57.837000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.403 [2024-05-15 20:28:57.837009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.403 [2024-05-15 20:28:57.850033] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.403 [2024-05-15 20:28:57.850054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.403 [2024-05-15 20:28:57.850063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.403 [2024-05-15 20:28:57.862722] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.403 [2024-05-15 20:28:57.862743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.403 [2024-05-15 20:28:57.862752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.403 [2024-05-15 20:28:57.874158] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.403 [2024-05-15 20:28:57.874180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.403 [2024-05-15 20:28:57.874188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:37:05.403 [2024-05-15 20:28:57.887349] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.403 [2024-05-15 20:28:57.887370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.404 [2024-05-15 20:28:57.887379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.404 [2024-05-15 20:28:57.898011] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.404 [2024-05-15 20:28:57.898032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.404 [2024-05-15 20:28:57.898041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.664 [2024-05-15 20:28:57.912023] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.664 [2024-05-15 20:28:57.912044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:10355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.664 [2024-05-15 20:28:57.912053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.664 [2024-05-15 20:28:57.924787] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.664 [2024-05-15 20:28:57.924808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.664 [2024-05-15 20:28:57.924816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.664 [2024-05-15 20:28:57.935537] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.664 [2024-05-15 20:28:57.935558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.664 [2024-05-15 20:28:57.935566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.664 [2024-05-15 20:28:57.949679] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.664 [2024-05-15 20:28:57.949700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.664 [2024-05-15 20:28:57.949708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.664 [2024-05-15 20:28:57.963309] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.664 [2024-05-15 20:28:57.963334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.664 [2024-05-15 20:28:57.963342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.664 [2024-05-15 20:28:57.974508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.664 [2024-05-15 20:28:57.974529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.665 [2024-05-15 20:28:57.974537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.665 [2024-05-15 20:28:57.987323] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.665 [2024-05-15 20:28:57.987344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.665 [2024-05-15 20:28:57.987352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.665 [2024-05-15 20:28:57.999663] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.665 [2024-05-15 20:28:57.999683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.665 [2024-05-15 20:28:57.999693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.665 [2024-05-15 20:28:58.012738] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.665 [2024-05-15 20:28:58.012760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:18131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.665 [2024-05-15 20:28:58.012770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.665 [2024-05-15 20:28:58.024503] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.665 [2024-05-15 20:28:58.024523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.665 [2024-05-15 20:28:58.024536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.665 [2024-05-15 20:28:58.036404] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.665 [2024-05-15 20:28:58.036424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.665 [2024-05-15 20:28:58.036432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.665 [2024-05-15 20:28:58.052501] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.665 [2024-05-15 20:28:58.052522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.665 [2024-05-15 20:28:58.052531] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.665 [2024-05-15 20:28:58.065460] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.665 [2024-05-15 20:28:58.065482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.665 [2024-05-15 20:28:58.065491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.665 [2024-05-15 20:28:58.076381] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.665 [2024-05-15 20:28:58.076402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.665 [2024-05-15 20:28:58.076411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.665 [2024-05-15 20:28:58.089283] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.665 [2024-05-15 20:28:58.089305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.665 [2024-05-15 20:28:58.089318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.665 [2024-05-15 20:28:58.101203] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.665 [2024-05-15 20:28:58.101224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.665 [2024-05-15 20:28:58.101233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.665 [2024-05-15 20:28:58.112214] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.665 [2024-05-15 20:28:58.112234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.665 [2024-05-15 20:28:58.112243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.665 [2024-05-15 20:28:58.126122] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.665 [2024-05-15 20:28:58.126143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.665 [2024-05-15 20:28:58.126152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.665 [2024-05-15 20:28:58.138457] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.665 [2024-05-15 20:28:58.138482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:05.665 [2024-05-15 20:28:58.138491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.665 [2024-05-15 20:28:58.149924] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.665 [2024-05-15 20:28:58.149946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.665 [2024-05-15 20:28:58.149954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.665 [2024-05-15 20:28:58.163446] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.665 [2024-05-15 20:28:58.163467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.665 [2024-05-15 20:28:58.163476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.926 [2024-05-15 20:28:58.175754] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.926 [2024-05-15 20:28:58.175774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.926 [2024-05-15 20:28:58.175783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.926 [2024-05-15 20:28:58.187783] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.926 [2024-05-15 20:28:58.187804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.926 [2024-05-15 20:28:58.187813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.926 [2024-05-15 20:28:58.199536] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.926 [2024-05-15 20:28:58.199556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.926 [2024-05-15 20:28:58.199565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.926 [2024-05-15 20:28:58.213354] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.926 [2024-05-15 20:28:58.213375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.926 [2024-05-15 20:28:58.213383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.926 [2024-05-15 20:28:58.224979] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.926 [2024-05-15 20:28:58.224999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 
lba:1496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.926 [2024-05-15 20:28:58.225008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.926 [2024-05-15 20:28:58.236517] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.926 [2024-05-15 20:28:58.236538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:14035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.926 [2024-05-15 20:28:58.236546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.926 [2024-05-15 20:28:58.249615] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.927 [2024-05-15 20:28:58.249635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.927 [2024-05-15 20:28:58.249644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.927 [2024-05-15 20:28:58.263617] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.927 [2024-05-15 20:28:58.263638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.927 [2024-05-15 20:28:58.263646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.927 [2024-05-15 20:28:58.274223] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.927 [2024-05-15 20:28:58.274244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.927 [2024-05-15 20:28:58.274253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.927 [2024-05-15 20:28:58.287293] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.927 [2024-05-15 20:28:58.287318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.927 [2024-05-15 20:28:58.287327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.927 [2024-05-15 20:28:58.299180] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.927 [2024-05-15 20:28:58.299201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.927 [2024-05-15 20:28:58.299210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.927 [2024-05-15 20:28:58.312068] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.927 [2024-05-15 20:28:58.312089] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.927 [2024-05-15 20:28:58.312097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.927 [2024-05-15 20:28:58.322621] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.927 [2024-05-15 20:28:58.322641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.927 [2024-05-15 20:28:58.322650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.927 [2024-05-15 20:28:58.336547] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.927 [2024-05-15 20:28:58.336568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.927 [2024-05-15 20:28:58.336576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.927 [2024-05-15 20:28:58.350406] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.927 [2024-05-15 20:28:58.350426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.927 [2024-05-15 20:28:58.350439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.927 [2024-05-15 20:28:58.362401] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.927 [2024-05-15 20:28:58.362422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.927 [2024-05-15 20:28:58.362431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.927 [2024-05-15 20:28:58.374033] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.927 [2024-05-15 20:28:58.374054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.927 [2024-05-15 20:28:58.374063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.927 [2024-05-15 20:28:58.387520] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.927 [2024-05-15 20:28:58.387541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:15911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.927 [2024-05-15 20:28:58.387549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.927 [2024-05-15 20:28:58.400050] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 
00:37:05.927 [2024-05-15 20:28:58.400070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.927 [2024-05-15 20:28:58.400079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.927 [2024-05-15 20:28:58.411721] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.927 [2024-05-15 20:28:58.411741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:25381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.927 [2024-05-15 20:28:58.411751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:05.927 [2024-05-15 20:28:58.423929] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:05.927 [2024-05-15 20:28:58.423949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:05.927 [2024-05-15 20:28:58.423958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.188 [2024-05-15 20:28:58.435989] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.188 [2024-05-15 20:28:58.436010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.188 [2024-05-15 20:28:58.436019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.188 [2024-05-15 20:28:58.449577] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.188 [2024-05-15 20:28:58.449598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.188 [2024-05-15 20:28:58.449606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.188 [2024-05-15 20:28:58.460430] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.188 [2024-05-15 20:28:58.460451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.188 [2024-05-15 20:28:58.460459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.188 [2024-05-15 20:28:58.474515] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.188 [2024-05-15 20:28:58.474535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.188 [2024-05-15 20:28:58.474544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.188 [2024-05-15 20:28:58.486668] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.188 [2024-05-15 20:28:58.486689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.188 [2024-05-15 20:28:58.486697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.188 [2024-05-15 20:28:58.498395] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.188 [2024-05-15 20:28:58.498415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.188 [2024-05-15 20:28:58.498424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.188 [2024-05-15 20:28:58.511708] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.188 [2024-05-15 20:28:58.511728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.188 [2024-05-15 20:28:58.511737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.188 [2024-05-15 20:28:58.523165] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.188 [2024-05-15 20:28:58.523185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.188 [2024-05-15 20:28:58.523194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.188 [2024-05-15 20:28:58.537071] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.188 [2024-05-15 20:28:58.537092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.188 [2024-05-15 20:28:58.537101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.189 [2024-05-15 20:28:58.551243] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.189 [2024-05-15 20:28:58.551263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.189 [2024-05-15 20:28:58.551272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.189 [2024-05-15 20:28:58.561903] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.189 [2024-05-15 20:28:58.561924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.189 [2024-05-15 20:28:58.561936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:37:06.189 [2024-05-15 20:28:58.574855] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.189 [2024-05-15 20:28:58.574875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.189 [2024-05-15 20:28:58.574885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.189 [2024-05-15 20:28:58.588513] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.189 [2024-05-15 20:28:58.588533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.189 [2024-05-15 20:28:58.588541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.189 [2024-05-15 20:28:58.602747] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.189 [2024-05-15 20:28:58.602768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.189 [2024-05-15 20:28:58.602776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.189 [2024-05-15 20:28:58.614268] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.189 [2024-05-15 20:28:58.614288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.189 [2024-05-15 20:28:58.614297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.189 [2024-05-15 20:28:58.628712] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.189 [2024-05-15 20:28:58.628733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.189 [2024-05-15 20:28:58.628741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.189 [2024-05-15 20:28:58.642070] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.189 [2024-05-15 20:28:58.642090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.189 [2024-05-15 20:28:58.642099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.189 [2024-05-15 20:28:58.653057] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.189 [2024-05-15 20:28:58.653077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.189 [2024-05-15 20:28:58.653086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.189 [2024-05-15 20:28:58.667589] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.189 [2024-05-15 20:28:58.667609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.189 [2024-05-15 20:28:58.667618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.189 [2024-05-15 20:28:58.680295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.189 [2024-05-15 20:28:58.680323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.189 [2024-05-15 20:28:58.680332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.450 [2024-05-15 20:28:58.690952] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.450 [2024-05-15 20:28:58.690973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.450 [2024-05-15 20:28:58.690981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.450 [2024-05-15 20:28:58.704504] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.450 [2024-05-15 20:28:58.704525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.450 [2024-05-15 20:28:58.704533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.450 [2024-05-15 20:28:58.717434] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.450 [2024-05-15 20:28:58.717454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.450 [2024-05-15 20:28:58.717463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.450 [2024-05-15 20:28:58.730449] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.450 [2024-05-15 20:28:58.730470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.450 [2024-05-15 20:28:58.730478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.450 [2024-05-15 20:28:58.742096] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.450 [2024-05-15 20:28:58.742116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.450 [2024-05-15 20:28:58.742125] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.450 [2024-05-15 20:28:58.756032] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.450 [2024-05-15 20:28:58.756053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.450 [2024-05-15 20:28:58.756062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.450 [2024-05-15 20:28:58.767828] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.450 [2024-05-15 20:28:58.767848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.450 [2024-05-15 20:28:58.767857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.450 [2024-05-15 20:28:58.781182] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.450 [2024-05-15 20:28:58.781203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.450 [2024-05-15 20:28:58.781211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.450 [2024-05-15 20:28:58.793290] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.450 [2024-05-15 20:28:58.793310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.450 [2024-05-15 20:28:58.793324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.450 [2024-05-15 20:28:58.805831] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.450 [2024-05-15 20:28:58.805851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.450 [2024-05-15 20:28:58.805859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.450 [2024-05-15 20:28:58.819149] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.450 [2024-05-15 20:28:58.819170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.450 [2024-05-15 20:28:58.819178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.450 [2024-05-15 20:28:58.830337] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.450 [2024-05-15 20:28:58.830357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:06.450 [2024-05-15 20:28:58.830366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.450 [2024-05-15 20:28:58.843254] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.450 [2024-05-15 20:28:58.843274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.450 [2024-05-15 20:28:58.843282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.450 [2024-05-15 20:28:58.854845] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.450 [2024-05-15 20:28:58.854865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.450 [2024-05-15 20:28:58.854873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.450 [2024-05-15 20:28:58.868191] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.450 [2024-05-15 20:28:58.868211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.450 [2024-05-15 20:28:58.868220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.450 [2024-05-15 20:28:58.879958] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.450 [2024-05-15 20:28:58.879978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.450 [2024-05-15 20:28:58.879987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.450 [2024-05-15 20:28:58.891141] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.450 [2024-05-15 20:28:58.891161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.450 [2024-05-15 20:28:58.891173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.450 [2024-05-15 20:28:58.904731] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.450 [2024-05-15 20:28:58.904752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.450 [2024-05-15 20:28:58.904760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.450 [2024-05-15 20:28:58.916902] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.450 [2024-05-15 20:28:58.916923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 
lba:1873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.450 [2024-05-15 20:28:58.916931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.450 [2024-05-15 20:28:58.927603] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.450 [2024-05-15 20:28:58.927623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.450 [2024-05-15 20:28:58.927632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.450 [2024-05-15 20:28:58.942000] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.450 [2024-05-15 20:28:58.942020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.450 [2024-05-15 20:28:58.942029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.711 [2024-05-15 20:28:58.954802] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.711 [2024-05-15 20:28:58.954823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.711 [2024-05-15 20:28:58.954831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.711 [2024-05-15 20:28:58.967877] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.711 [2024-05-15 20:28:58.967897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.711 [2024-05-15 20:28:58.967906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.711 [2024-05-15 20:28:58.979978] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.711 [2024-05-15 20:28:58.979998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.711 [2024-05-15 20:28:58.980007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.711 [2024-05-15 20:28:58.993497] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.711 [2024-05-15 20:28:58.993517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.711 [2024-05-15 20:28:58.993526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.711 [2024-05-15 20:28:59.006107] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.711 [2024-05-15 20:28:59.006127] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.711 [2024-05-15 20:28:59.006136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.711 [2024-05-15 20:28:59.017002] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.711 [2024-05-15 20:28:59.017023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.711 [2024-05-15 20:28:59.017031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.711 [2024-05-15 20:28:59.029350] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.711 [2024-05-15 20:28:59.029371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.711 [2024-05-15 20:28:59.029380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.711 [2024-05-15 20:28:59.043003] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.711 [2024-05-15 20:28:59.043023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.711 [2024-05-15 20:28:59.043031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.711 [2024-05-15 20:28:59.054820] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.712 [2024-05-15 20:28:59.054840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.712 [2024-05-15 20:28:59.054848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.712 [2024-05-15 20:28:59.066945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.712 [2024-05-15 20:28:59.066966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.712 [2024-05-15 20:28:59.066974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.712 [2024-05-15 20:28:59.080251] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.712 [2024-05-15 20:28:59.080272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.712 [2024-05-15 20:28:59.080280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.712 [2024-05-15 20:28:59.091389] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 
00:37:06.712 [2024-05-15 20:28:59.091410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.712 [2024-05-15 20:28:59.091418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.712 [2024-05-15 20:28:59.104810] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.712 [2024-05-15 20:28:59.104831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.712 [2024-05-15 20:28:59.104843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.712 [2024-05-15 20:28:59.117816] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.712 [2024-05-15 20:28:59.117837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.712 [2024-05-15 20:28:59.117845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.712 [2024-05-15 20:28:59.129859] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.712 [2024-05-15 20:28:59.129880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.712 [2024-05-15 20:28:59.129888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.712 [2024-05-15 20:28:59.142662] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.712 [2024-05-15 20:28:59.142683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.712 [2024-05-15 20:28:59.142691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.712 [2024-05-15 20:28:59.154796] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.712 [2024-05-15 20:28:59.154815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.712 [2024-05-15 20:28:59.154824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.712 [2024-05-15 20:28:59.165695] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.712 [2024-05-15 20:28:59.165715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.712 [2024-05-15 20:28:59.165723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.712 [2024-05-15 20:28:59.178641] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.712 [2024-05-15 20:28:59.178662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.712 [2024-05-15 20:28:59.178670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.712 [2024-05-15 20:28:59.193106] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.712 [2024-05-15 20:28:59.193127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.712 [2024-05-15 20:28:59.193136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.712 [2024-05-15 20:28:59.205274] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.712 [2024-05-15 20:28:59.205294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.712 [2024-05-15 20:28:59.205303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.973 [2024-05-15 20:28:59.216923] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.973 [2024-05-15 20:28:59.216947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.973 [2024-05-15 20:28:59.216956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.973 [2024-05-15 20:28:59.230845] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.973 [2024-05-15 20:28:59.230866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.973 [2024-05-15 20:28:59.230874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.973 [2024-05-15 20:28:59.243494] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.973 [2024-05-15 20:28:59.243515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.973 [2024-05-15 20:28:59.243523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.973 [2024-05-15 20:28:59.256117] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.973 [2024-05-15 20:28:59.256138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.973 [2024-05-15 20:28:59.256146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.973 [2024-05-15 20:28:59.272046] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.973 [2024-05-15 20:28:59.272066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.973 [2024-05-15 20:28:59.272075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.973 [2024-05-15 20:28:59.283956] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.973 [2024-05-15 20:28:59.283977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.973 [2024-05-15 20:28:59.283985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.973 [2024-05-15 20:28:59.296091] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.973 [2024-05-15 20:28:59.296112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.973 [2024-05-15 20:28:59.296120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.973 [2024-05-15 20:28:59.308034] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.973 [2024-05-15 20:28:59.308056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.973 [2024-05-15 20:28:59.308066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.973 [2024-05-15 20:28:59.320819] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.973 [2024-05-15 20:28:59.320839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.973 [2024-05-15 20:28:59.320848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.973 [2024-05-15 20:28:59.332577] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.973 [2024-05-15 20:28:59.332597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.973 [2024-05-15 20:28:59.332605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.973 [2024-05-15 20:28:59.344387] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.973 [2024-05-15 20:28:59.344407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.973 [2024-05-15 20:28:59.344416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:37:06.973 [2024-05-15 20:28:59.356976] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.973 [2024-05-15 20:28:59.356996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.973 [2024-05-15 20:28:59.357005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.973 [2024-05-15 20:28:59.369845] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.973 [2024-05-15 20:28:59.369866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.973 [2024-05-15 20:28:59.369874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.973 [2024-05-15 20:28:59.382015] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.973 [2024-05-15 20:28:59.382035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.973 [2024-05-15 20:28:59.382044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.973 [2024-05-15 20:28:59.395265] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.973 [2024-05-15 20:28:59.395285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.973 [2024-05-15 20:28:59.395294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.973 [2024-05-15 20:28:59.407366] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.973 [2024-05-15 20:28:59.407387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:14435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.973 [2024-05-15 20:28:59.407397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.973 [2024-05-15 20:28:59.418022] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.973 [2024-05-15 20:28:59.418042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.973 [2024-05-15 20:28:59.418050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.973 [2024-05-15 20:28:59.431216] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.973 [2024-05-15 20:28:59.431236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.973 [2024-05-15 20:28:59.431249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.973 [2024-05-15 20:28:59.443459] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.973 [2024-05-15 20:28:59.443480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.973 [2024-05-15 20:28:59.443488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.974 [2024-05-15 20:28:59.456039] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.974 [2024-05-15 20:28:59.456059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.974 [2024-05-15 20:28:59.456067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:06.974 [2024-05-15 20:28:59.468839] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:06.974 [2024-05-15 20:28:59.468859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:06.974 [2024-05-15 20:28:59.468868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.234 [2024-05-15 20:28:59.480764] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:07.234 [2024-05-15 20:28:59.480784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.234 [2024-05-15 20:28:59.480793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.234 [2024-05-15 20:28:59.492279] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:07.234 [2024-05-15 20:28:59.492300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.234 [2024-05-15 20:28:59.492308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.234 [2024-05-15 20:28:59.504442] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:07.234 [2024-05-15 20:28:59.504463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.234 [2024-05-15 20:28:59.504471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.234 [2024-05-15 20:28:59.517691] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:07.235 [2024-05-15 20:28:59.517711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.235 [2024-05-15 20:28:59.517720] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.235 [2024-05-15 20:28:59.529426] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:07.235 [2024-05-15 20:28:59.529447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.235 [2024-05-15 20:28:59.529455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.235 [2024-05-15 20:28:59.540600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:07.235 [2024-05-15 20:28:59.540620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.235 [2024-05-15 20:28:59.540629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.235 [2024-05-15 20:28:59.553275] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:07.235 [2024-05-15 20:28:59.553295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.235 [2024-05-15 20:28:59.553304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.235 [2024-05-15 20:28:59.567043] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:07.235 [2024-05-15 20:28:59.567064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.235 [2024-05-15 20:28:59.567073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.235 [2024-05-15 20:28:59.578190] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:07.235 [2024-05-15 20:28:59.578210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.235 [2024-05-15 20:28:59.578219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.235 [2024-05-15 20:28:59.588963] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:07.235 [2024-05-15 20:28:59.588982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:07.235 [2024-05-15 20:28:59.588991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:07.235 [2024-05-15 20:28:59.602617] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00) 00:37:07.235 [2024-05-15 20:28:59.602637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:07.235 [2024-05-15 20:28:59.602645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:07.235 [2024-05-15 20:28:59.614761] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00)
00:37:07.235 [2024-05-15 20:28:59.614782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:07.235 [2024-05-15 20:28:59.614790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:07.235 [2024-05-15 20:28:59.625623] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1358c00)
00:37:07.235 [2024-05-15 20:28:59.625643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:07.235 [2024-05-15 20:28:59.625651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:07.235
00:37:07.235 Latency(us)
00:37:07.235 Device Information                        : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:37:07.235 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:37:07.235 nvme0n1                                   :       2.01   20352.69      79.50       0.00       0.00    6276.48    3112.96   16711.68
00:37:07.235 ===================================================================================================================
00:37:07.235 Total                                     :              20352.69      79.50       0.00       0.00    6276.48    3112.96   16711.68
00:37:07.235 0
00:37:07.235 20:28:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:37:07.235 20:28:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:37:07.235 20:28:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:37:07.235 | .driver_specific
00:37:07.235 | .nvme_error
00:37:07.235 | .status_code
00:37:07.235 | .command_transient_transport_error'
00:37:07.235 20:28:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:37:07.496 20:28:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 160 > 0 ))
00:37:07.496 20:28:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 310162
00:37:07.496 20:28:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 310162 ']'
00:37:07.496 20:28:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 310162
00:37:07.496 20:28:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:37:07.496 20:28:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:37:07.496 20:28:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 310162
00:37:07.496 20:28:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:37:07.496 20:28:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:37:07.496 20:28:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo
'killing process with pid 310162' 00:37:07.496 killing process with pid 310162 00:37:07.496 20:28:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 310162 00:37:07.496 Received shutdown signal, test time was about 2.000000 seconds 00:37:07.496 00:37:07.496 Latency(us) 00:37:07.496 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:07.496 =================================================================================================================== 00:37:07.496 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:07.496 20:28:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 310162 00:37:07.756 20:29:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:37:07.756 20:29:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:07.756 20:29:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:37:07.756 20:29:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:37:07.756 20:29:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:37:07.756 20:29:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=310694 00:37:07.756 20:29:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 310694 /var/tmp/bperf.sock 00:37:07.756 20:29:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 310694 ']' 00:37:07.756 20:29:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:37:07.756 20:29:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:07.756 20:29:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:07.756 20:29:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:07.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:07.756 20:29:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:07.756 20:29:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:07.756 [2024-05-15 20:29:00.081421] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:37:07.756 [2024-05-15 20:29:00.081477] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid310694 ] 00:37:07.757 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:07.757 Zero copy mechanism will not be used. 
00:37:07.757 EAL: No free 2048 kB hugepages reported on node 1 00:37:07.757 [2024-05-15 20:29:00.145586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:07.757 [2024-05-15 20:29:00.210853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:08.017 20:29:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:08.017 20:29:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:37:08.017 20:29:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:08.017 20:29:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:08.017 20:29:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:08.017 20:29:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:08.017 20:29:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:08.017 20:29:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:08.017 20:29:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:08.017 20:29:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:08.588 nvme0n1 00:37:08.588 20:29:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:37:08.588 20:29:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:08.588 20:29:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:08.588 20:29:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:08.588 20:29:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:08.588 20:29:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:08.588 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:08.588 Zero copy mechanism will not be used. 00:37:08.588 Running I/O for 2 seconds... 
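Read as a whole, the xtrace lines above amount to a short RPC sequence, restated below as plain rpc.py/bdevperf.py invocations. This is a minimal sketch, not the verbatim test script: the shell variable names ($rpc, $bperf_py, $sock, errcount) are illustrative, and it assumes the accel_error_inject_error calls are sent to the same bdevperf RPC socket as the other calls, which this excerpt does not actually show (the suite issues them through its rpc_cmd helper). Every RPC name and flag is copied from the trace; the jq path is the multi-line filter from the earlier error-count check collapsed onto one line.

  # Hedged sketch of the digest-error flow recorded above (socket for error injection assumed).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  bperf_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
  sock=/var/tmp/bperf.sock   # socket bdevperf was started with (-r /var/tmp/bperf.sock)

  # Keep per-bdev NVMe error statistics and retry failed I/O indefinitely.
  $rpc -s $sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Start from a clean accel error state (assumption: same socket; the trace only shows "rpc_cmd ...").
  $rpc -s $sock accel_error_inject_error -o crc32c -t disable
  # Attach the NVMe/TCP controller with data digest enabled (--ddgst).
  $rpc -s $sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
       -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Inject crc32c corruption (flags verbatim from the trace above).
  $rpc -s $sock accel_error_inject_error -o crc32c -t corrupt -i 32
  # Run the configured bdevperf job (this pass: randread, 128 KiB I/O, queue depth 16, 2 seconds).
  $bperf_py -s $sock perform_tests

  # Each injected mismatch appears in the surrounding log as a "data digest error" followed by a
  # COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion; the test then requires the per-bdev
  # counter to be non-zero, as in the earlier 4 KiB run where the count was 160.
  errcount=$($rpc -s $sock bdev_get_iostat -b nvme0n1 | jq -r \
      '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 ))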
00:37:08.588 [2024-05-15 20:29:01.014894] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:08.588 [2024-05-15 20:29:01.014930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.588 [2024-05-15 20:29:01.014941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.588 [2024-05-15 20:29:01.027894] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:08.588 [2024-05-15 20:29:01.027920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.588 [2024-05-15 20:29:01.027935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.588 [2024-05-15 20:29:01.039649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:08.588 [2024-05-15 20:29:01.039672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.588 [2024-05-15 20:29:01.039681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.588 [2024-05-15 20:29:01.052152] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:08.588 [2024-05-15 20:29:01.052174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.588 [2024-05-15 20:29:01.052182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.588 [2024-05-15 20:29:01.065074] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:08.588 [2024-05-15 20:29:01.065095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.588 [2024-05-15 20:29:01.065104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.588 [2024-05-15 20:29:01.078231] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:08.588 [2024-05-15 20:29:01.078252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.588 [2024-05-15 20:29:01.078260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.849 [2024-05-15 20:29:01.091558] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:08.849 [2024-05-15 20:29:01.091579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.849 [2024-05-15 20:29:01.091587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.849 [2024-05-15 20:29:01.104629] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:08.849 [2024-05-15 20:29:01.104651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.849 [2024-05-15 20:29:01.104660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.849 [2024-05-15 20:29:01.116667] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:08.849 [2024-05-15 20:29:01.116688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.849 [2024-05-15 20:29:01.116697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.849 [2024-05-15 20:29:01.130307] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:08.849 [2024-05-15 20:29:01.130334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.849 [2024-05-15 20:29:01.130342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.849 [2024-05-15 20:29:01.143916] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:08.849 [2024-05-15 20:29:01.143941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.849 [2024-05-15 20:29:01.143949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.849 [2024-05-15 20:29:01.156565] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:08.849 [2024-05-15 20:29:01.156585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.849 [2024-05-15 20:29:01.156594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.849 [2024-05-15 20:29:01.169482] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:08.849 [2024-05-15 20:29:01.169503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.849 [2024-05-15 20:29:01.169512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.849 [2024-05-15 20:29:01.182172] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:08.849 [2024-05-15 20:29:01.182193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.849 [2024-05-15 20:29:01.182201] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.849 [2024-05-15 20:29:01.194638] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:08.849 [2024-05-15 20:29:01.194660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.850 [2024-05-15 20:29:01.194668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.850 [2024-05-15 20:29:01.206884] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:08.850 [2024-05-15 20:29:01.206905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.850 [2024-05-15 20:29:01.206914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.850 [2024-05-15 20:29:01.218173] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:08.850 [2024-05-15 20:29:01.218196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.850 [2024-05-15 20:29:01.218205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.850 [2024-05-15 20:29:01.230560] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:08.850 [2024-05-15 20:29:01.230581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.850 [2024-05-15 20:29:01.230590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.850 [2024-05-15 20:29:01.243973] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:08.850 [2024-05-15 20:29:01.243995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.850 [2024-05-15 20:29:01.244008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.850 [2024-05-15 20:29:01.257219] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:08.850 [2024-05-15 20:29:01.257240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.850 [2024-05-15 20:29:01.257248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.850 [2024-05-15 20:29:01.269353] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:08.850 [2024-05-15 20:29:01.269374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:08.850 [2024-05-15 20:29:01.269382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.850 [2024-05-15 20:29:01.282558] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:08.850 [2024-05-15 20:29:01.282579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.850 [2024-05-15 20:29:01.282587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.850 [2024-05-15 20:29:01.296008] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:08.850 [2024-05-15 20:29:01.296029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.850 [2024-05-15 20:29:01.296038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:08.850 [2024-05-15 20:29:01.308953] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:08.850 [2024-05-15 20:29:01.308974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.850 [2024-05-15 20:29:01.308982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:08.850 [2024-05-15 20:29:01.322429] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:08.850 [2024-05-15 20:29:01.322450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.850 [2024-05-15 20:29:01.322458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:08.850 [2024-05-15 20:29:01.334849] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:08.850 [2024-05-15 20:29:01.334871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.850 [2024-05-15 20:29:01.334879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:08.850 [2024-05-15 20:29:01.347573] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:08.850 [2024-05-15 20:29:01.347594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:08.850 [2024-05-15 20:29:01.347602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:09.111 [2024-05-15 20:29:01.359141] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.111 [2024-05-15 20:29:01.359167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.111 [2024-05-15 20:29:01.359175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:09.111 [2024-05-15 20:29:01.371323] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.111 [2024-05-15 20:29:01.371345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.111 [2024-05-15 20:29:01.371353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:09.111 [2024-05-15 20:29:01.385626] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.111 [2024-05-15 20:29:01.385647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.111 [2024-05-15 20:29:01.385655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:09.111 [2024-05-15 20:29:01.400611] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.111 [2024-05-15 20:29:01.400632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.111 [2024-05-15 20:29:01.400640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:09.111 [2024-05-15 20:29:01.412833] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.111 [2024-05-15 20:29:01.412855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.111 [2024-05-15 20:29:01.412863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:09.111 [2024-05-15 20:29:01.425794] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.111 [2024-05-15 20:29:01.425816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.111 [2024-05-15 20:29:01.425824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:09.111 [2024-05-15 20:29:01.438353] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.111 [2024-05-15 20:29:01.438375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.111 [2024-05-15 20:29:01.438383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:09.111 [2024-05-15 20:29:01.451488] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.111 [2024-05-15 20:29:01.451510] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.111 [2024-05-15 20:29:01.451518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:09.111 [2024-05-15 20:29:01.464105] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.112 [2024-05-15 20:29:01.464126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.112 [2024-05-15 20:29:01.464135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:09.112 [2024-05-15 20:29:01.476364] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.112 [2024-05-15 20:29:01.476385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.112 [2024-05-15 20:29:01.476393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:09.112 [2024-05-15 20:29:01.488928] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.112 [2024-05-15 20:29:01.488950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.112 [2024-05-15 20:29:01.488959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:09.112 [2024-05-15 20:29:01.501879] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.112 [2024-05-15 20:29:01.501901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.112 [2024-05-15 20:29:01.501909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:09.112 [2024-05-15 20:29:01.513950] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.112 [2024-05-15 20:29:01.513972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.112 [2024-05-15 20:29:01.513981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:09.112 [2024-05-15 20:29:01.526779] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.112 [2024-05-15 20:29:01.526801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.112 [2024-05-15 20:29:01.526809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:09.112 [2024-05-15 20:29:01.540442] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 
00:37:09.112 [2024-05-15 20:29:01.540463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.112 [2024-05-15 20:29:01.540471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:09.112 [2024-05-15 20:29:01.554006] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.112 [2024-05-15 20:29:01.554027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.112 [2024-05-15 20:29:01.554035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:09.112 [2024-05-15 20:29:01.566756] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.112 [2024-05-15 20:29:01.566777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.112 [2024-05-15 20:29:01.566785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:09.112 [2024-05-15 20:29:01.577270] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.112 [2024-05-15 20:29:01.577291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.112 [2024-05-15 20:29:01.577303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:09.112 [2024-05-15 20:29:01.589346] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.112 [2024-05-15 20:29:01.589367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.112 [2024-05-15 20:29:01.589375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:09.112 [2024-05-15 20:29:01.601410] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.112 [2024-05-15 20:29:01.601432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.112 [2024-05-15 20:29:01.601440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:09.373 [2024-05-15 20:29:01.615786] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.373 [2024-05-15 20:29:01.615807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.374 [2024-05-15 20:29:01.615815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:09.374 [2024-05-15 20:29:01.624410] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.374 [2024-05-15 20:29:01.624431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.374 [2024-05-15 20:29:01.624439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:09.374 [2024-05-15 20:29:01.635203] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.374 [2024-05-15 20:29:01.635225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.374 [2024-05-15 20:29:01.635233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:09.374 [2024-05-15 20:29:01.647591] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.374 [2024-05-15 20:29:01.647612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.374 [2024-05-15 20:29:01.647620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:09.374 [2024-05-15 20:29:01.661151] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.374 [2024-05-15 20:29:01.661172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.374 [2024-05-15 20:29:01.661181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:09.374 [2024-05-15 20:29:01.674664] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.374 [2024-05-15 20:29:01.674685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.374 [2024-05-15 20:29:01.674693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:09.374 [2024-05-15 20:29:01.687752] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.374 [2024-05-15 20:29:01.687777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.374 [2024-05-15 20:29:01.687785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:09.374 [2024-05-15 20:29:01.701844] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.374 [2024-05-15 20:29:01.701865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.374 [2024-05-15 20:29:01.701873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:37:09.374 [2024-05-15 20:29:01.714487] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.374 [2024-05-15 20:29:01.714509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.374 [2024-05-15 20:29:01.714518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:09.374 [2024-05-15 20:29:01.727443] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.374 [2024-05-15 20:29:01.727465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.374 [2024-05-15 20:29:01.727474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:09.374 [2024-05-15 20:29:01.740212] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.374 [2024-05-15 20:29:01.740233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.374 [2024-05-15 20:29:01.740241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:09.374 [2024-05-15 20:29:01.751738] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.374 [2024-05-15 20:29:01.751760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.374 [2024-05-15 20:29:01.751768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:09.374 [2024-05-15 20:29:01.766277] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.374 [2024-05-15 20:29:01.766299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.374 [2024-05-15 20:29:01.766308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:09.374 [2024-05-15 20:29:01.779241] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.374 [2024-05-15 20:29:01.779263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.374 [2024-05-15 20:29:01.779271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:09.374 [2024-05-15 20:29:01.792526] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.374 [2024-05-15 20:29:01.792547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.374 [2024-05-15 20:29:01.792559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:09.374 [2024-05-15 20:29:01.806228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.374 [2024-05-15 20:29:01.806250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.374 [2024-05-15 20:29:01.806259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:09.374 [2024-05-15 20:29:01.818993] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.374 [2024-05-15 20:29:01.819015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.374 [2024-05-15 20:29:01.819023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:09.374 [2024-05-15 20:29:01.831601] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.374 [2024-05-15 20:29:01.831623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.374 [2024-05-15 20:29:01.831632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:09.374 [2024-05-15 20:29:01.846001] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.374 [2024-05-15 20:29:01.846024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.374 [2024-05-15 20:29:01.846034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:09.374 [2024-05-15 20:29:01.858548] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.374 [2024-05-15 20:29:01.858570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.374 [2024-05-15 20:29:01.858578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:09.374 [2024-05-15 20:29:01.870157] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.374 [2024-05-15 20:29:01.870179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.374 [2024-05-15 20:29:01.870187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:09.635 [2024-05-15 20:29:01.882749] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.635 [2024-05-15 20:29:01.882771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.635 [2024-05-15 20:29:01.882779] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:09.635 [2024-05-15 20:29:01.895055] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.635 [2024-05-15 20:29:01.895076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.635 [2024-05-15 20:29:01.895085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:09.635 [2024-05-15 20:29:01.908395] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.635 [2024-05-15 20:29:01.908420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.635 [2024-05-15 20:29:01.908428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:09.635 [2024-05-15 20:29:01.921842] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.635 [2024-05-15 20:29:01.921864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.635 [2024-05-15 20:29:01.921872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:09.635 [2024-05-15 20:29:01.933800] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.635 [2024-05-15 20:29:01.933822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.635 [2024-05-15 20:29:01.933831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:09.635 [2024-05-15 20:29:01.946834] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.635 [2024-05-15 20:29:01.946856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.635 [2024-05-15 20:29:01.946864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:09.635 [2024-05-15 20:29:01.960528] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.635 [2024-05-15 20:29:01.960549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.635 [2024-05-15 20:29:01.960558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:09.635 [2024-05-15 20:29:01.973419] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.635 [2024-05-15 20:29:01.973440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:09.635 [2024-05-15 20:29:01.973448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:09.635 [2024-05-15 20:29:01.986971] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.635 [2024-05-15 20:29:01.986992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.635 [2024-05-15 20:29:01.987000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:09.635 [2024-05-15 20:29:02.000290] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.635 [2024-05-15 20:29:02.000318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.635 [2024-05-15 20:29:02.000327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:09.635 [2024-05-15 20:29:02.013283] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.635 [2024-05-15 20:29:02.013304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.635 [2024-05-15 20:29:02.013319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:09.635 [2024-05-15 20:29:02.025618] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.635 [2024-05-15 20:29:02.025639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.635 [2024-05-15 20:29:02.025648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:09.635 [2024-05-15 20:29:02.038639] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.635 [2024-05-15 20:29:02.038660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.635 [2024-05-15 20:29:02.038669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:09.635 [2024-05-15 20:29:02.052187] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.635 [2024-05-15 20:29:02.052210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.635 [2024-05-15 20:29:02.052219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:09.635 [2024-05-15 20:29:02.063766] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.635 [2024-05-15 20:29:02.063787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.635 [2024-05-15 20:29:02.063796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:09.635 [2024-05-15 20:29:02.075311] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.635 [2024-05-15 20:29:02.075337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.635 [2024-05-15 20:29:02.075346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:09.635 [2024-05-15 20:29:02.088376] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.635 [2024-05-15 20:29:02.088398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.635 [2024-05-15 20:29:02.088407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:09.635 [2024-05-15 20:29:02.101989] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.635 [2024-05-15 20:29:02.102010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.635 [2024-05-15 20:29:02.102018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:09.635 [2024-05-15 20:29:02.114217] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.635 [2024-05-15 20:29:02.114239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.635 [2024-05-15 20:29:02.114247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:09.635 [2024-05-15 20:29:02.127808] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.635 [2024-05-15 20:29:02.127829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.635 [2024-05-15 20:29:02.127841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:09.896 [2024-05-15 20:29:02.142912] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.896 [2024-05-15 20:29:02.142933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.896 [2024-05-15 20:29:02.142942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:09.896 [2024-05-15 20:29:02.157639] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.896 [2024-05-15 20:29:02.157660] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.896 [2024-05-15 20:29:02.157669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:09.896 [2024-05-15 20:29:02.172304] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.896 [2024-05-15 20:29:02.172330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.896 [2024-05-15 20:29:02.172338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:09.896 [2024-05-15 20:29:02.186519] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.896 [2024-05-15 20:29:02.186540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.896 [2024-05-15 20:29:02.186548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:09.896 [2024-05-15 20:29:02.201344] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.896 [2024-05-15 20:29:02.201364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.896 [2024-05-15 20:29:02.201373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:09.896 [2024-05-15 20:29:02.216131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.896 [2024-05-15 20:29:02.216152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.896 [2024-05-15 20:29:02.216160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:09.896 [2024-05-15 20:29:02.228994] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.896 [2024-05-15 20:29:02.229014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.896 [2024-05-15 20:29:02.229022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:09.896 [2024-05-15 20:29:02.243259] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.896 [2024-05-15 20:29:02.243280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.896 [2024-05-15 20:29:02.243288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:09.897 [2024-05-15 20:29:02.256501] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 
00:37:09.897 [2024-05-15 20:29:02.256523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.897 [2024-05-15 20:29:02.256533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:09.897 [2024-05-15 20:29:02.269228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.897 [2024-05-15 20:29:02.269249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.897 [2024-05-15 20:29:02.269257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:09.897 [2024-05-15 20:29:02.282120] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.897 [2024-05-15 20:29:02.282142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.897 [2024-05-15 20:29:02.282150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:09.897 [2024-05-15 20:29:02.293866] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.897 [2024-05-15 20:29:02.293887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.897 [2024-05-15 20:29:02.293895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:09.897 [2024-05-15 20:29:02.307886] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.897 [2024-05-15 20:29:02.307907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.897 [2024-05-15 20:29:02.307915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:09.897 [2024-05-15 20:29:02.321650] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.897 [2024-05-15 20:29:02.321671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.897 [2024-05-15 20:29:02.321679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:09.897 [2024-05-15 20:29:02.334215] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.897 [2024-05-15 20:29:02.334235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.897 [2024-05-15 20:29:02.334244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:09.897 [2024-05-15 20:29:02.346272] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.897 [2024-05-15 20:29:02.346293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.897 [2024-05-15 20:29:02.346301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:09.897 [2024-05-15 20:29:02.360590] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.897 [2024-05-15 20:29:02.360610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.897 [2024-05-15 20:29:02.360622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:09.897 [2024-05-15 20:29:02.374115] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.897 [2024-05-15 20:29:02.374137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.897 [2024-05-15 20:29:02.374145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:09.897 [2024-05-15 20:29:02.387106] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:09.897 [2024-05-15 20:29:02.387127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:09.897 [2024-05-15 20:29:02.387136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:10.157 [2024-05-15 20:29:02.402194] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.158 [2024-05-15 20:29:02.402215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.158 [2024-05-15 20:29:02.402223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:10.158 [2024-05-15 20:29:02.416833] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.158 [2024-05-15 20:29:02.416854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.158 [2024-05-15 20:29:02.416863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:10.158 [2024-05-15 20:29:02.430735] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.158 [2024-05-15 20:29:02.430755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.158 [2024-05-15 20:29:02.430764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:37:10.158 [2024-05-15 20:29:02.446109] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.158 [2024-05-15 20:29:02.446130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.158 [2024-05-15 20:29:02.446139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:10.158 [2024-05-15 20:29:02.459064] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.158 [2024-05-15 20:29:02.459085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.158 [2024-05-15 20:29:02.459094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:10.158 [2024-05-15 20:29:02.473444] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.158 [2024-05-15 20:29:02.473465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.158 [2024-05-15 20:29:02.473474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:10.158 [2024-05-15 20:29:02.488061] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.158 [2024-05-15 20:29:02.488086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.158 [2024-05-15 20:29:02.488094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:10.158 [2024-05-15 20:29:02.502272] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.158 [2024-05-15 20:29:02.502294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.158 [2024-05-15 20:29:02.502302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:10.158 [2024-05-15 20:29:02.515189] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.158 [2024-05-15 20:29:02.515211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.158 [2024-05-15 20:29:02.515219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:10.158 [2024-05-15 20:29:02.529008] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.158 [2024-05-15 20:29:02.529029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.158 [2024-05-15 20:29:02.529037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:10.158 [2024-05-15 20:29:02.542999] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.158 [2024-05-15 20:29:02.543021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.158 [2024-05-15 20:29:02.543029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:10.158 [2024-05-15 20:29:02.556573] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.158 [2024-05-15 20:29:02.556595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.158 [2024-05-15 20:29:02.556604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:10.158 [2024-05-15 20:29:02.570417] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.158 [2024-05-15 20:29:02.570439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.158 [2024-05-15 20:29:02.570447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:10.158 [2024-05-15 20:29:02.582190] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.158 [2024-05-15 20:29:02.582212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.158 [2024-05-15 20:29:02.582220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:10.158 [2024-05-15 20:29:02.594360] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.158 [2024-05-15 20:29:02.594383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.158 [2024-05-15 20:29:02.594391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:10.158 [2024-05-15 20:29:02.608856] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.158 [2024-05-15 20:29:02.608877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.158 [2024-05-15 20:29:02.608885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:10.158 [2024-05-15 20:29:02.622877] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.158 [2024-05-15 20:29:02.622899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.158 [2024-05-15 20:29:02.622907] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:10.158 [2024-05-15 20:29:02.635382] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.158 [2024-05-15 20:29:02.635404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.158 [2024-05-15 20:29:02.635413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:10.158 [2024-05-15 20:29:02.648792] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.158 [2024-05-15 20:29:02.648814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.158 [2024-05-15 20:29:02.648822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:10.419 [2024-05-15 20:29:02.662516] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.419 [2024-05-15 20:29:02.662537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.419 [2024-05-15 20:29:02.662545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:10.420 [2024-05-15 20:29:02.676302] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.420 [2024-05-15 20:29:02.676328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.420 [2024-05-15 20:29:02.676337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:10.420 [2024-05-15 20:29:02.687998] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.420 [2024-05-15 20:29:02.688020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.420 [2024-05-15 20:29:02.688028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:10.420 [2024-05-15 20:29:02.700806] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.420 [2024-05-15 20:29:02.700828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.420 [2024-05-15 20:29:02.700836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:10.420 [2024-05-15 20:29:02.712759] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.420 [2024-05-15 20:29:02.712781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
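Each pair of records in this stream is one READ whose received payload failed CRC32C data-digest verification on the initiator side (nvme_tcp_accel_seq_recv_compute_crc32_done), so the command is completed back to the bdev layer with status (00/22), COMMAND TRANSIENT TRANSPORT ERROR. That is the outcome the digest test is arming via crc32c error injection (the injection RPC is visible in the randwrite pass further below). A quick way to summarize a captured stream like this, assuming the bdevperf console output has been saved to a file (console.log is a hypothetical name, not something the harness writes):

  # digest failures detected by the NVMe/TCP initiator
  grep -c 'data digest error on tqpair' console.log
  # completions surfaced as transient transport errors
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' console.log
  # spot-check which LBAs/lengths were affected
  grep -o 'lba:[0-9]* len:[0-9]*' console.log | sort | uniq -c | sort -rn | head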
00:37:10.420 [2024-05-15 20:29:02.712793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:10.420 [2024-05-15 20:29:02.724132] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.420 [2024-05-15 20:29:02.724154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.420 [2024-05-15 20:29:02.724162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:10.420 [2024-05-15 20:29:02.737236] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.420 [2024-05-15 20:29:02.737259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.420 [2024-05-15 20:29:02.737267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:10.420 [2024-05-15 20:29:02.751064] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.420 [2024-05-15 20:29:02.751086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.420 [2024-05-15 20:29:02.751094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:10.420 [2024-05-15 20:29:02.766139] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.420 [2024-05-15 20:29:02.766160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.420 [2024-05-15 20:29:02.766169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:10.420 [2024-05-15 20:29:02.780247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.420 [2024-05-15 20:29:02.780268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.420 [2024-05-15 20:29:02.780277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:10.420 [2024-05-15 20:29:02.791479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.420 [2024-05-15 20:29:02.791500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.420 [2024-05-15 20:29:02.791508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:10.420 [2024-05-15 20:29:02.799658] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.420 [2024-05-15 20:29:02.799680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7136 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.420 [2024-05-15 20:29:02.799688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:10.420 [2024-05-15 20:29:02.808226] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.420 [2024-05-15 20:29:02.808247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.420 [2024-05-15 20:29:02.808255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:10.420 [2024-05-15 20:29:02.819321] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.420 [2024-05-15 20:29:02.819346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.420 [2024-05-15 20:29:02.819354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:10.420 [2024-05-15 20:29:02.830618] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.420 [2024-05-15 20:29:02.830640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.420 [2024-05-15 20:29:02.830648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:10.420 [2024-05-15 20:29:02.842289] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.420 [2024-05-15 20:29:02.842311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.420 [2024-05-15 20:29:02.842324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:10.420 [2024-05-15 20:29:02.854646] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.420 [2024-05-15 20:29:02.854668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.420 [2024-05-15 20:29:02.854676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:10.420 [2024-05-15 20:29:02.867798] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.420 [2024-05-15 20:29:02.867821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.420 [2024-05-15 20:29:02.867829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:10.420 [2024-05-15 20:29:02.881436] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.420 [2024-05-15 20:29:02.881459] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.420 [2024-05-15 20:29:02.881468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:10.420 [2024-05-15 20:29:02.895662] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.420 [2024-05-15 20:29:02.895684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.420 [2024-05-15 20:29:02.895692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:10.420 [2024-05-15 20:29:02.909100] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.420 [2024-05-15 20:29:02.909121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.420 [2024-05-15 20:29:02.909130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:10.680 [2024-05-15 20:29:02.921975] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.680 [2024-05-15 20:29:02.921997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.680 [2024-05-15 20:29:02.922005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:10.680 [2024-05-15 20:29:02.934960] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.680 [2024-05-15 20:29:02.934982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.680 [2024-05-15 20:29:02.934990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:10.680 [2024-05-15 20:29:02.949334] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.680 [2024-05-15 20:29:02.949355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.680 [2024-05-15 20:29:02.949363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:10.680 [2024-05-15 20:29:02.962645] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.680 [2024-05-15 20:29:02.962667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.680 [2024-05-15 20:29:02.962676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:10.680 [2024-05-15 20:29:02.974780] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.680 [2024-05-15 20:29:02.974802] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.680 [2024-05-15 20:29:02.974810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:10.680 [2024-05-15 20:29:02.988461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.680 [2024-05-15 20:29:02.988483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.680 [2024-05-15 20:29:02.988491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:10.680 [2024-05-15 20:29:03.000786] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2456d70) 00:37:10.680 [2024-05-15 20:29:03.000808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:10.680 [2024-05-15 20:29:03.000816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:10.680 00:37:10.680 Latency(us) 00:37:10.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:10.680 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:37:10.680 nvme0n1 : 2.00 2396.83 299.60 0.00 0.00 6672.02 1515.52 15837.87 00:37:10.680 =================================================================================================================== 00:37:10.680 Total : 2396.83 299.60 0.00 0.00 6672.02 1515.52 15837.87 00:37:10.680 0 00:37:10.680 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:10.680 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:10.680 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:10.680 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:10.680 | .driver_specific 00:37:10.680 | .nvme_error 00:37:10.680 | .status_code 00:37:10.680 | .command_transient_transport_error' 00:37:10.940 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 154 > 0 )) 00:37:10.940 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 310694 00:37:10.940 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 310694 ']' 00:37:10.940 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 310694 00:37:10.940 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:37:10.941 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:10.941 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 310694 00:37:10.941 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:10.941 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- 
# '[' reactor_1 = sudo ']' 00:37:10.941 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 310694' 00:37:10.941 killing process with pid 310694 00:37:10.941 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 310694 00:37:10.941 Received shutdown signal, test time was about 2.000000 seconds 00:37:10.941 00:37:10.941 Latency(us) 00:37:10.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:10.941 =================================================================================================================== 00:37:10.941 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:10.941 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 310694 00:37:10.941 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:37:10.941 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:10.941 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:37:10.941 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:37:10.941 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:37:10.941 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=311299 00:37:10.941 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 311299 /var/tmp/bperf.sock 00:37:10.941 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 311299 ']' 00:37:10.941 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:37:10.941 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:10.941 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:10.941 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:10.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:10.941 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:10.941 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:11.201 [2024-05-15 20:29:03.469651] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
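The randwrite pass that starts here is driven by the same host/digest.sh helpers as the randread pass above. Pulling the traced commands together, the setup looks roughly like the sketch below; the paths, address and NQN are copied from the trace, the ordering is reconstructed from it, and in the trace the accel_error_inject_error calls go through rpc_cmd (presumably the target application's default RPC socket) rather than the bperf socket.

  # Sketch of the error-injection flow reconstructed from the trace; not a verbatim replay.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF_RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"

  # 1. Start bdevperf waiting for RPC configuration (-z): randwrite, 4 KiB I/O, QD 128, 2 s run.
  #    (The harness waits for the RPC socket via waitforlisten before issuing the calls below.)
  "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

  # 2. Keep per-status-code NVMe error counters and retry transport errors indefinitely.
  $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # 3. Clear any previous crc32c injection, then attach the target with data digest (--ddgst) enabled.
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
  $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # 4. Corrupt every 256th crc32c operation in the accel framework, then run the workload.
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

  # 5. Read back how many commands completed with COMMAND TRANSIENT TRANSPORT ERROR.
  $BPERF_RPC bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'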
00:37:11.201 [2024-05-15 20:29:03.469706] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid311299 ] 00:37:11.201 EAL: No free 2048 kB hugepages reported on node 1 00:37:11.201 [2024-05-15 20:29:03.533634] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:11.201 [2024-05-15 20:29:03.597113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:11.201 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:11.201 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:37:11.201 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:11.201 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:11.462 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:11.462 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:11.462 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:11.462 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:11.462 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:11.462 20:29:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:12.032 nvme0n1 00:37:12.032 20:29:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:37:12.032 20:29:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:12.032 20:29:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:12.032 20:29:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:12.032 20:29:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:12.032 20:29:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:12.032 Running I/O for 2 seconds... 
00:37:12.032 [2024-05-15 20:29:04.417050] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.032 [2024-05-15 20:29:04.417459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.032 [2024-05-15 20:29:04.417494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.032 [2024-05-15 20:29:04.429262] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.032 [2024-05-15 20:29:04.429733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.032 [2024-05-15 20:29:04.429757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.032 [2024-05-15 20:29:04.441511] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.032 [2024-05-15 20:29:04.441888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.032 [2024-05-15 20:29:04.441909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.032 [2024-05-15 20:29:04.453624] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.032 [2024-05-15 20:29:04.454011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.032 [2024-05-15 20:29:04.454032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.032 [2024-05-15 20:29:04.465760] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.032 [2024-05-15 20:29:04.466137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.033 [2024-05-15 20:29:04.466157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.033 [2024-05-15 20:29:04.477862] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.033 [2024-05-15 20:29:04.478141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.033 [2024-05-15 20:29:04.478162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.033 [2024-05-15 20:29:04.489928] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.033 [2024-05-15 20:29:04.490353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.033 [2024-05-15 20:29:04.490373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:37:12.033 [2024-05-15 20:29:04.502036] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.033 [2024-05-15 20:29:04.502471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.033 [2024-05-15 20:29:04.502490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.033 [2024-05-15 20:29:04.514129] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.033 [2024-05-15 20:29:04.514565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.033 [2024-05-15 20:29:04.514584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.033 [2024-05-15 20:29:04.526183] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.033 [2024-05-15 20:29:04.526589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.033 [2024-05-15 20:29:04.526609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.293 [2024-05-15 20:29:04.538265] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.293 [2024-05-15 20:29:04.538685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.293 [2024-05-15 20:29:04.538704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.293 [2024-05-15 20:29:04.550301] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.293 [2024-05-15 20:29:04.550807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.293 [2024-05-15 20:29:04.550826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.294 [2024-05-15 20:29:04.562372] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.294 [2024-05-15 20:29:04.562753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.294 [2024-05-15 20:29:04.562777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.294 [2024-05-15 20:29:04.574450] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.294 [2024-05-15 20:29:04.574868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.294 [2024-05-15 20:29:04.574888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:37:12.294 [2024-05-15 20:29:04.586511] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.294 [2024-05-15 20:29:04.586895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.294 [2024-05-15 20:29:04.586914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.294 [2024-05-15 20:29:04.598534] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.294 [2024-05-15 20:29:04.598838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.294 [2024-05-15 20:29:04.598860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.294 [2024-05-15 20:29:04.610809] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.294 [2024-05-15 20:29:04.611235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.294 [2024-05-15 20:29:04.611255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.294 [2024-05-15 20:29:04.622819] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.294 [2024-05-15 20:29:04.623202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.294 [2024-05-15 20:29:04.623223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.294 [2024-05-15 20:29:04.634866] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.294 [2024-05-15 20:29:04.635300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.294 [2024-05-15 20:29:04.635326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.294 [2024-05-15 20:29:04.646903] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.294 [2024-05-15 20:29:04.647276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.294 [2024-05-15 20:29:04.647296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.294 [2024-05-15 20:29:04.658966] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.294 [2024-05-15 20:29:04.659251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.294 [2024-05-15 20:29:04.659271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:37:12.294 [2024-05-15 20:29:04.671000] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.294 [2024-05-15 20:29:04.671357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.294 [2024-05-15 20:29:04.671377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.294 [2024-05-15 20:29:04.683054] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.294 [2024-05-15 20:29:04.683474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.294 [2024-05-15 20:29:04.683494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.294 [2024-05-15 20:29:04.695085] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.294 [2024-05-15 20:29:04.695472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.294 [2024-05-15 20:29:04.695491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.294 [2024-05-15 20:29:04.707135] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.294 [2024-05-15 20:29:04.707502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.294 [2024-05-15 20:29:04.707526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.294 [2024-05-15 20:29:04.719161] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.294 [2024-05-15 20:29:04.719537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.294 [2024-05-15 20:29:04.719556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.294 [2024-05-15 20:29:04.731180] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.294 [2024-05-15 20:29:04.731630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.294 [2024-05-15 20:29:04.731649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.294 [2024-05-15 20:29:04.743229] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.294 [2024-05-15 20:29:04.743650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.294 [2024-05-15 20:29:04.743670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.294 [2024-05-15 20:29:04.755236] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.294 [2024-05-15 20:29:04.755630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.294 [2024-05-15 20:29:04.755650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.294 [2024-05-15 20:29:04.767301] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.294 [2024-05-15 20:29:04.767660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.294 [2024-05-15 20:29:04.767679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.294 [2024-05-15 20:29:04.779465] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.294 [2024-05-15 20:29:04.779890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.294 [2024-05-15 20:29:04.779914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.294 [2024-05-15 20:29:04.791492] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.294 [2024-05-15 20:29:04.791873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.294 [2024-05-15 20:29:04.791893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.556 [2024-05-15 20:29:04.803547] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.556 [2024-05-15 20:29:04.803910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.556 [2024-05-15 20:29:04.803933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.556 [2024-05-15 20:29:04.815560] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.556 [2024-05-15 20:29:04.815937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.556 [2024-05-15 20:29:04.815956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.556 [2024-05-15 20:29:04.827652] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.556 [2024-05-15 20:29:04.827942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.556 [2024-05-15 20:29:04.827961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.556 [2024-05-15 20:29:04.839677] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.556 [2024-05-15 20:29:04.840039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.556 [2024-05-15 20:29:04.840062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.556 [2024-05-15 20:29:04.851723] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.556 [2024-05-15 20:29:04.852018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.556 [2024-05-15 20:29:04.852037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.556 [2024-05-15 20:29:04.863756] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.556 [2024-05-15 20:29:04.864117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.556 [2024-05-15 20:29:04.864140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.556 [2024-05-15 20:29:04.875787] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.556 [2024-05-15 20:29:04.876166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.556 [2024-05-15 20:29:04.876188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.556 [2024-05-15 20:29:04.887810] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.556 [2024-05-15 20:29:04.888158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.556 [2024-05-15 20:29:04.888181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.556 [2024-05-15 20:29:04.899855] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.556 [2024-05-15 20:29:04.900278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.556 [2024-05-15 20:29:04.900297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.556 [2024-05-15 20:29:04.911874] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.556 [2024-05-15 20:29:04.912231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.556 [2024-05-15 20:29:04.912255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.556 [2024-05-15 20:29:04.923912] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.556 [2024-05-15 20:29:04.924385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.556 [2024-05-15 20:29:04.924405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.556 [2024-05-15 20:29:04.935935] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.556 [2024-05-15 20:29:04.936362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.556 [2024-05-15 20:29:04.936382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.556 [2024-05-15 20:29:04.947974] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.556 [2024-05-15 20:29:04.948401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.556 [2024-05-15 20:29:04.948421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.556 [2024-05-15 20:29:04.960011] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.556 [2024-05-15 20:29:04.960447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.556 [2024-05-15 20:29:04.960467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.556 [2024-05-15 20:29:04.972075] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.556 [2024-05-15 20:29:04.972458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.556 [2024-05-15 20:29:04.972477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.556 [2024-05-15 20:29:04.984125] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.556 [2024-05-15 20:29:04.984520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.556 [2024-05-15 20:29:04.984540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.556 [2024-05-15 20:29:04.996176] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.556 [2024-05-15 20:29:04.996567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.556 [2024-05-15 20:29:04.996586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.556 [2024-05-15 20:29:05.008203] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.556 [2024-05-15 20:29:05.008667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.556 [2024-05-15 20:29:05.008687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.556 [2024-05-15 20:29:05.020253] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.556 [2024-05-15 20:29:05.020696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.556 [2024-05-15 20:29:05.020716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.556 [2024-05-15 20:29:05.032306] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.556 [2024-05-15 20:29:05.032680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.556 [2024-05-15 20:29:05.032699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.556 [2024-05-15 20:29:05.044368] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.556 [2024-05-15 20:29:05.044655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.556 [2024-05-15 20:29:05.044674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.817 [2024-05-15 20:29:05.056495] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.817 [2024-05-15 20:29:05.056913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.817 [2024-05-15 20:29:05.056932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.817 [2024-05-15 20:29:05.068567] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.817 [2024-05-15 20:29:05.068995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.817 [2024-05-15 20:29:05.069015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.817 [2024-05-15 20:29:05.080605] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.817 [2024-05-15 20:29:05.080894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.817 [2024-05-15 20:29:05.080915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.817 [2024-05-15 20:29:05.092678] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.817 [2024-05-15 20:29:05.093124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.817 [2024-05-15 20:29:05.093143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.817 [2024-05-15 20:29:05.104717] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.817 [2024-05-15 20:29:05.105107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.817 [2024-05-15 20:29:05.105127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.817 [2024-05-15 20:29:05.116744] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.817 [2024-05-15 20:29:05.117101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.817 [2024-05-15 20:29:05.117121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.817 [2024-05-15 20:29:05.128797] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.817 [2024-05-15 20:29:05.129242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.817 [2024-05-15 20:29:05.129262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.817 [2024-05-15 20:29:05.140838] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.817 [2024-05-15 20:29:05.141278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.817 [2024-05-15 20:29:05.141297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.817 [2024-05-15 20:29:05.152871] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.818 [2024-05-15 20:29:05.153283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.818 [2024-05-15 20:29:05.153302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.818 [2024-05-15 20:29:05.164922] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.818 [2024-05-15 20:29:05.165204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.818 [2024-05-15 20:29:05.165223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.818 [2024-05-15 20:29:05.176970] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.818 [2024-05-15 20:29:05.177360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.818 [2024-05-15 20:29:05.177379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.818 [2024-05-15 20:29:05.189035] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.818 [2024-05-15 20:29:05.189318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.818 [2024-05-15 20:29:05.189338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.818 [2024-05-15 20:29:05.201063] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.818 [2024-05-15 20:29:05.201493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.818 [2024-05-15 20:29:05.201512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.818 [2024-05-15 20:29:05.213110] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.818 [2024-05-15 20:29:05.213561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.818 [2024-05-15 20:29:05.213581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.818 [2024-05-15 20:29:05.225120] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.818 [2024-05-15 20:29:05.225553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.818 [2024-05-15 20:29:05.225573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.818 [2024-05-15 20:29:05.237193] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.818 [2024-05-15 20:29:05.237653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.818 [2024-05-15 20:29:05.237672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.818 [2024-05-15 20:29:05.249203] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.818 [2024-05-15 20:29:05.249583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.818 [2024-05-15 20:29:05.249604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.818 [2024-05-15 20:29:05.261233] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.818 [2024-05-15 20:29:05.261615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.818 [2024-05-15 20:29:05.261635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.818 [2024-05-15 20:29:05.273291] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.818 [2024-05-15 20:29:05.273692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.818 [2024-05-15 20:29:05.273712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.818 [2024-05-15 20:29:05.285368] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.818 [2024-05-15 20:29:05.285757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.818 [2024-05-15 20:29:05.285777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.818 [2024-05-15 20:29:05.297397] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.818 [2024-05-15 20:29:05.297772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.818 [2024-05-15 20:29:05.297795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:12.818 [2024-05-15 20:29:05.309453] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:12.818 [2024-05-15 20:29:05.309845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:12.818 [2024-05-15 20:29:05.309865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.079 [2024-05-15 20:29:05.321450] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.079 [2024-05-15 20:29:05.321837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.079 [2024-05-15 20:29:05.321856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.079 [2024-05-15 20:29:05.333570] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.079 [2024-05-15 20:29:05.333970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.079 [2024-05-15 20:29:05.333990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.079 [2024-05-15 20:29:05.345618] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.079 [2024-05-15 20:29:05.346044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.079 [2024-05-15 20:29:05.346064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.079 [2024-05-15 20:29:05.357660] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.079 [2024-05-15 20:29:05.358073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.079 [2024-05-15 20:29:05.358093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.079 [2024-05-15 20:29:05.369699] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.079 [2024-05-15 20:29:05.370083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.079 [2024-05-15 20:29:05.370107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.079 [2024-05-15 20:29:05.381769] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.079 [2024-05-15 20:29:05.382159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.079 [2024-05-15 20:29:05.382179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.079 [2024-05-15 20:29:05.393769] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.079 [2024-05-15 20:29:05.394230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.079 [2024-05-15 20:29:05.394249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.079 [2024-05-15 20:29:05.405840] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.079 [2024-05-15 20:29:05.406209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.079 [2024-05-15 20:29:05.406233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.079 [2024-05-15 20:29:05.417876] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.079 [2024-05-15 20:29:05.418153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.079 [2024-05-15 20:29:05.418172] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.079 [2024-05-15 20:29:05.429927] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.079 [2024-05-15 20:29:05.430195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.079 [2024-05-15 20:29:05.430213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.079 [2024-05-15 20:29:05.441970] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.079 [2024-05-15 20:29:05.442427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.079 [2024-05-15 20:29:05.442447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.079 [2024-05-15 20:29:05.453995] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.079 [2024-05-15 20:29:05.454462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.079 [2024-05-15 20:29:05.454482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.079 [2024-05-15 20:29:05.466124] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.079 [2024-05-15 20:29:05.466557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.079 [2024-05-15 20:29:05.466576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.079 [2024-05-15 20:29:05.478202] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.079 [2024-05-15 20:29:05.478660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.079 [2024-05-15 20:29:05.478680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.079 [2024-05-15 20:29:05.490240] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.079 [2024-05-15 20:29:05.490670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.079 [2024-05-15 20:29:05.490690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.079 [2024-05-15 20:29:05.502300] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.079 [2024-05-15 20:29:05.502724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.079 [2024-05-15 20:29:05.502743] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.079 [2024-05-15 20:29:05.514349] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.079 [2024-05-15 20:29:05.514772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.079 [2024-05-15 20:29:05.514791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.079 [2024-05-15 20:29:05.526390] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.079 [2024-05-15 20:29:05.526674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.079 [2024-05-15 20:29:05.526693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.080 [2024-05-15 20:29:05.538433] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.080 [2024-05-15 20:29:05.538847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.080 [2024-05-15 20:29:05.538867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.080 [2024-05-15 20:29:05.550489] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.080 [2024-05-15 20:29:05.550952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.080 [2024-05-15 20:29:05.550974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.080 [2024-05-15 20:29:05.562527] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.080 [2024-05-15 20:29:05.562916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.080 [2024-05-15 20:29:05.562936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.080 [2024-05-15 20:29:05.574602] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.080 [2024-05-15 20:29:05.575012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.080 [2024-05-15 20:29:05.575032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.340 [2024-05-15 20:29:05.586627] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.340 [2024-05-15 20:29:05.587005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.340 [2024-05-15 20:29:05.587024] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.340 [2024-05-15 20:29:05.598682] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.340 [2024-05-15 20:29:05.599066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.340 [2024-05-15 20:29:05.599085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.340 [2024-05-15 20:29:05.610910] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.340 [2024-05-15 20:29:05.611349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.340 [2024-05-15 20:29:05.611373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.340 [2024-05-15 20:29:05.622988] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.340 [2024-05-15 20:29:05.623367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.340 [2024-05-15 20:29:05.623387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.340 [2024-05-15 20:29:05.635038] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.340 [2024-05-15 20:29:05.635480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.340 [2024-05-15 20:29:05.635500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.340 [2024-05-15 20:29:05.647090] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.340 [2024-05-15 20:29:05.647540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.340 [2024-05-15 20:29:05.647561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.340 [2024-05-15 20:29:05.659116] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.340 [2024-05-15 20:29:05.659508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.340 [2024-05-15 20:29:05.659530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.341 [2024-05-15 20:29:05.671177] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.341 [2024-05-15 20:29:05.671642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.341 [2024-05-15 
20:29:05.671662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.341 [2024-05-15 20:29:05.683222] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.341 [2024-05-15 20:29:05.683620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.341 [2024-05-15 20:29:05.683640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.341 [2024-05-15 20:29:05.695289] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.341 [2024-05-15 20:29:05.695745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.341 [2024-05-15 20:29:05.695765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.341 [2024-05-15 20:29:05.707329] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.341 [2024-05-15 20:29:05.707757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.341 [2024-05-15 20:29:05.707777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.341 [2024-05-15 20:29:05.719363] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.341 [2024-05-15 20:29:05.719739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.341 [2024-05-15 20:29:05.719763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.341 [2024-05-15 20:29:05.731396] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.341 [2024-05-15 20:29:05.731687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.341 [2024-05-15 20:29:05.731706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.341 [2024-05-15 20:29:05.743462] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.341 [2024-05-15 20:29:05.743839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.341 [2024-05-15 20:29:05.743864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.341 [2024-05-15 20:29:05.755489] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.341 [2024-05-15 20:29:05.755918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.341 
[2024-05-15 20:29:05.755937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.341 [2024-05-15 20:29:05.767560] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.341 [2024-05-15 20:29:05.767954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.341 [2024-05-15 20:29:05.767973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.341 [2024-05-15 20:29:05.779662] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.341 [2024-05-15 20:29:05.780094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.341 [2024-05-15 20:29:05.780114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.341 [2024-05-15 20:29:05.791664] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.341 [2024-05-15 20:29:05.792048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.341 [2024-05-15 20:29:05.792067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.341 [2024-05-15 20:29:05.803723] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.341 [2024-05-15 20:29:05.804175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.341 [2024-05-15 20:29:05.804195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.341 [2024-05-15 20:29:05.815736] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.341 [2024-05-15 20:29:05.816199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.341 [2024-05-15 20:29:05.816218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.341 [2024-05-15 20:29:05.827797] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.341 [2024-05-15 20:29:05.828246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.341 [2024-05-15 20:29:05.828266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.341 [2024-05-15 20:29:05.839817] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.341 [2024-05-15 20:29:05.840288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:37:13.341 [2024-05-15 20:29:05.840307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.602 [2024-05-15 20:29:05.851856] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.602 [2024-05-15 20:29:05.852275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.602 [2024-05-15 20:29:05.852293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.602 [2024-05-15 20:29:05.863866] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.602 [2024-05-15 20:29:05.864296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.602 [2024-05-15 20:29:05.864321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.602 [2024-05-15 20:29:05.875906] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.602 [2024-05-15 20:29:05.876299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.602 [2024-05-15 20:29:05.876329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.602 [2024-05-15 20:29:05.887973] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.602 [2024-05-15 20:29:05.888405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.602 [2024-05-15 20:29:05.888424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.602 [2024-05-15 20:29:05.900025] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.602 [2024-05-15 20:29:05.900444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.602 [2024-05-15 20:29:05.900464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.602 [2024-05-15 20:29:05.912058] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.602 [2024-05-15 20:29:05.912446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.602 [2024-05-15 20:29:05.912465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.602 [2024-05-15 20:29:05.924073] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.602 [2024-05-15 20:29:05.924498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20362 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:37:13.602 [2024-05-15 20:29:05.924518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.602 [2024-05-15 20:29:05.936119] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.602 [2024-05-15 20:29:05.936541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.602 [2024-05-15 20:29:05.936560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.602 [2024-05-15 20:29:05.948160] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.602 [2024-05-15 20:29:05.948583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.602 [2024-05-15 20:29:05.948602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.602 [2024-05-15 20:29:05.960227] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.602 [2024-05-15 20:29:05.960692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.602 [2024-05-15 20:29:05.960712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.602 [2024-05-15 20:29:05.972243] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.602 [2024-05-15 20:29:05.972630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.603 [2024-05-15 20:29:05.972654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.603 [2024-05-15 20:29:05.984283] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.603 [2024-05-15 20:29:05.984702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.603 [2024-05-15 20:29:05.984722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.603 [2024-05-15 20:29:05.996331] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.603 [2024-05-15 20:29:05.996768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.603 [2024-05-15 20:29:05.996787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.603 [2024-05-15 20:29:06.008380] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.603 [2024-05-15 20:29:06.008818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7437 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:37:13.603 [2024-05-15 20:29:06.008838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.603 [2024-05-15 20:29:06.020411] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.603 [2024-05-15 20:29:06.020904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.603 [2024-05-15 20:29:06.020923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.603 [2024-05-15 20:29:06.032446] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.603 [2024-05-15 20:29:06.032865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.603 [2024-05-15 20:29:06.032891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.603 [2024-05-15 20:29:06.044446] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.603 [2024-05-15 20:29:06.044815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.603 [2024-05-15 20:29:06.044839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.603 [2024-05-15 20:29:06.056476] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.603 [2024-05-15 20:29:06.056923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.603 [2024-05-15 20:29:06.056943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.603 [2024-05-15 20:29:06.068498] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.603 [2024-05-15 20:29:06.068958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.603 [2024-05-15 20:29:06.068978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.603 [2024-05-15 20:29:06.080540] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.603 [2024-05-15 20:29:06.080984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.603 [2024-05-15 20:29:06.081003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.603 [2024-05-15 20:29:06.092573] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.603 [2024-05-15 20:29:06.092945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21856 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:37:13.603 [2024-05-15 20:29:06.092964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.863 [2024-05-15 20:29:06.104632] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.864 [2024-05-15 20:29:06.105078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.864 [2024-05-15 20:29:06.105098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.864 [2024-05-15 20:29:06.116668] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.864 [2024-05-15 20:29:06.117093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.864 [2024-05-15 20:29:06.117113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.864 [2024-05-15 20:29:06.128693] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.864 [2024-05-15 20:29:06.129152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.864 [2024-05-15 20:29:06.129171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.864 [2024-05-15 20:29:06.140759] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.864 [2024-05-15 20:29:06.141190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.864 [2024-05-15 20:29:06.141210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.864 [2024-05-15 20:29:06.152827] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.864 [2024-05-15 20:29:06.153278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.864 [2024-05-15 20:29:06.153298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.864 [2024-05-15 20:29:06.164848] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.864 [2024-05-15 20:29:06.165276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.864 [2024-05-15 20:29:06.165296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.864 [2024-05-15 20:29:06.176890] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.864 [2024-05-15 20:29:06.177338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24519 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:37:13.864 [2024-05-15 20:29:06.177358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.864 [2024-05-15 20:29:06.188927] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.864 [2024-05-15 20:29:06.189316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.864 [2024-05-15 20:29:06.189336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.864 [2024-05-15 20:29:06.200982] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.864 [2024-05-15 20:29:06.201437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.864 [2024-05-15 20:29:06.201456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.864 [2024-05-15 20:29:06.213035] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.864 [2024-05-15 20:29:06.213285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.864 [2024-05-15 20:29:06.213305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.864 [2024-05-15 20:29:06.225078] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.864 [2024-05-15 20:29:06.225495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.864 [2024-05-15 20:29:06.225515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.864 [2024-05-15 20:29:06.237094] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.864 [2024-05-15 20:29:06.237510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.864 [2024-05-15 20:29:06.237530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.864 [2024-05-15 20:29:06.249150] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.864 [2024-05-15 20:29:06.249573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.864 [2024-05-15 20:29:06.249593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.864 [2024-05-15 20:29:06.261176] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.864 [2024-05-15 20:29:06.261627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3611 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.864 [2024-05-15 20:29:06.261646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.864 [2024-05-15 20:29:06.273215] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.864 [2024-05-15 20:29:06.273695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.864 [2024-05-15 20:29:06.273714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.864 [2024-05-15 20:29:06.285235] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.864 [2024-05-15 20:29:06.285675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.864 [2024-05-15 20:29:06.285694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.864 [2024-05-15 20:29:06.297282] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.864 [2024-05-15 20:29:06.297709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.864 [2024-05-15 20:29:06.297728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.864 [2024-05-15 20:29:06.309304] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.864 [2024-05-15 20:29:06.309675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.864 [2024-05-15 20:29:06.309694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.864 [2024-05-15 20:29:06.321347] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.864 [2024-05-15 20:29:06.321785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.864 [2024-05-15 20:29:06.321804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.864 [2024-05-15 20:29:06.333369] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.864 [2024-05-15 20:29:06.333814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.864 [2024-05-15 20:29:06.333833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.864 [2024-05-15 20:29:06.345431] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.864 [2024-05-15 20:29:06.345854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2105 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.864 [2024-05-15 20:29:06.345877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:13.864 [2024-05-15 20:29:06.357443] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:13.864 [2024-05-15 20:29:06.357891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:13.864 [2024-05-15 20:29:06.357910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.125 [2024-05-15 20:29:06.369480] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:14.125 [2024-05-15 20:29:06.369938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.125 [2024-05-15 20:29:06.369957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.125 [2024-05-15 20:29:06.381493] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:14.125 [2024-05-15 20:29:06.381961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.125 [2024-05-15 20:29:06.381981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.125 [2024-05-15 20:29:06.393562] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:14.125 [2024-05-15 20:29:06.393974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.125 [2024-05-15 20:29:06.393994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.125 [2024-05-15 20:29:06.405598] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3850) with pdu=0x2000190f8618 00:37:14.125 [2024-05-15 20:29:06.406045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:14.125 [2024-05-15 20:29:06.406065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.125 00:37:14.125 Latency(us) 00:37:14.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:14.125 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:14.125 nvme0n1 : 2.01 21176.63 82.72 0.00 0.00 6030.99 3194.88 12397.23 00:37:14.125 =================================================================================================================== 00:37:14.125 Total : 21176.63 82.72 0.00 0.00 6030.99 3194.88 12397.23 00:37:14.125 0 00:37:14.125 20:29:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:14.125 20:29:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:14.125 20:29:06 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:14.125 | .driver_specific 00:37:14.125 | .nvme_error 00:37:14.125 | .status_code 00:37:14.125 | .command_transient_transport_error' 00:37:14.125 20:29:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:14.386 20:29:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 166 > 0 )) 00:37:14.386 20:29:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 311299 00:37:14.386 20:29:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 311299 ']' 00:37:14.386 20:29:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 311299 00:37:14.386 20:29:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:37:14.386 20:29:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:14.386 20:29:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 311299 00:37:14.386 20:29:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:14.386 20:29:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:14.386 20:29:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 311299' 00:37:14.386 killing process with pid 311299 00:37:14.386 20:29:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 311299 00:37:14.386 Received shutdown signal, test time was about 2.000000 seconds 00:37:14.386 00:37:14.386 Latency(us) 00:37:14.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:14.386 =================================================================================================================== 00:37:14.386 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:14.386 20:29:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 311299 00:37:14.386 20:29:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:37:14.386 20:29:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:14.386 20:29:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:37:14.386 20:29:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:37:14.386 20:29:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:37:14.386 20:29:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=311900 00:37:14.386 20:29:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 311900 /var/tmp/bperf.sock 00:37:14.386 20:29:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 311900 ']' 00:37:14.386 20:29:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:37:14.386 20:29:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:14.386 20:29:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:37:14.386 20:29:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:14.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:14.386 20:29:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:14.386 20:29:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:14.386 [2024-05-15 20:29:06.874697] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:37:14.386 [2024-05-15 20:29:06.874749] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid311900 ] 00:37:14.386 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:14.386 Zero copy mechanism will not be used. 00:37:14.648 EAL: No free 2048 kB hugepages reported on node 1 00:37:14.648 [2024-05-15 20:29:06.940532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:14.648 [2024-05-15 20:29:07.004222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:14.648 20:29:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:14.648 20:29:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:37:14.648 20:29:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:14.648 20:29:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:14.909 20:29:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:14.909 20:29:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:14.909 20:29:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:14.909 20:29:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:14.909 20:29:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:14.909 20:29:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:15.169 nvme0n1 00:37:15.169 20:29:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:37:15.169 20:29:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:15.169 20:29:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:15.169 20:29:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:15.169 20:29:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 
00:37:15.169 20:29:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:15.430 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:15.430 Zero copy mechanism will not be used. 00:37:15.430 Running I/O for 2 seconds... 00:37:15.430 [2024-05-15 20:29:07.698469] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.430 [2024-05-15 20:29:07.698786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.430 [2024-05-15 20:29:07.698820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:15.430 [2024-05-15 20:29:07.713145] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.430 [2024-05-15 20:29:07.713554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.430 [2024-05-15 20:29:07.713579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:15.430 [2024-05-15 20:29:07.725179] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.430 [2024-05-15 20:29:07.725577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.430 [2024-05-15 20:29:07.725599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:15.430 [2024-05-15 20:29:07.737582] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.430 [2024-05-15 20:29:07.737967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.430 [2024-05-15 20:29:07.737989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.430 [2024-05-15 20:29:07.750013] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.430 [2024-05-15 20:29:07.750120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.430 [2024-05-15 20:29:07.750138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:15.430 [2024-05-15 20:29:07.761280] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.431 [2024-05-15 20:29:07.761461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.431 [2024-05-15 20:29:07.761480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:15.431 [2024-05-15 20:29:07.773223] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.431 [2024-05-15 20:29:07.773562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.431 [2024-05-15 20:29:07.773584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:15.431 [2024-05-15 20:29:07.784179] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.431 [2024-05-15 20:29:07.784297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.431 [2024-05-15 20:29:07.784323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.431 [2024-05-15 20:29:07.795167] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.431 [2024-05-15 20:29:07.795708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.431 [2024-05-15 20:29:07.795728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:15.431 [2024-05-15 20:29:07.805735] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.431 [2024-05-15 20:29:07.805852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.431 [2024-05-15 20:29:07.805870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:15.431 [2024-05-15 20:29:07.816602] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.431 [2024-05-15 20:29:07.816994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.431 [2024-05-15 20:29:07.817014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:15.431 [2024-05-15 20:29:07.827684] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.431 [2024-05-15 20:29:07.828207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.431 [2024-05-15 20:29:07.828227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.431 [2024-05-15 20:29:07.838427] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.431 [2024-05-15 20:29:07.838963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.431 [2024-05-15 20:29:07.838984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:15.431 [2024-05-15 20:29:07.849584] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.431 [2024-05-15 20:29:07.850107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.431 [2024-05-15 20:29:07.850128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:15.431 [2024-05-15 20:29:07.860331] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.431 [2024-05-15 20:29:07.860715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.431 [2024-05-15 20:29:07.860736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:15.431 [2024-05-15 20:29:07.871377] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.431 [2024-05-15 20:29:07.871753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.431 [2024-05-15 20:29:07.871774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.431 [2024-05-15 20:29:07.881979] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.431 [2024-05-15 20:29:07.882475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.431 [2024-05-15 20:29:07.882495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:15.431 [2024-05-15 20:29:07.892577] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.431 [2024-05-15 20:29:07.892765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.431 [2024-05-15 20:29:07.892783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:15.431 [2024-05-15 20:29:07.903534] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.431 [2024-05-15 20:29:07.903882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.431 [2024-05-15 20:29:07.903902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:15.431 [2024-05-15 20:29:07.914342] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.431 [2024-05-15 20:29:07.914756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.431 [2024-05-15 20:29:07.914776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
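Every corrupted write in this dump follows the same two-line pattern: tcp.c reports a data digest mismatch on the qpair, and the WRITE is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is what the command_transient_transport_error counter read by the jq filter near the top of this run tracks. A hedged one-liner for reading that tally (path shortened; same socket and bdev name as in the trace):

    # Read the accumulated transient-transport-error count for nvme0n1 via the bperf socket.
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

host/digest.sh then only asserts that the value is greater than zero (the "(( 166 > 0 ))" check earlier), i.e. that digest corruption actually surfaced as transport-level errors.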
00:37:15.431 [2024-05-15 20:29:07.924951] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.431 [2024-05-15 20:29:07.925309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.431 [2024-05-15 20:29:07.925335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:15.694 [2024-05-15 20:29:07.937027] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.694 [2024-05-15 20:29:07.937555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.694 [2024-05-15 20:29:07.937580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:15.694 [2024-05-15 20:29:07.949254] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.694 [2024-05-15 20:29:07.949776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.694 [2024-05-15 20:29:07.949796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:15.694 [2024-05-15 20:29:07.958745] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.694 [2024-05-15 20:29:07.959159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.694 [2024-05-15 20:29:07.959179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.694 [2024-05-15 20:29:07.967930] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.694 [2024-05-15 20:29:07.968203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.694 [2024-05-15 20:29:07.968223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:15.694 [2024-05-15 20:29:07.977472] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.694 [2024-05-15 20:29:07.977846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.694 [2024-05-15 20:29:07.977866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:15.694 [2024-05-15 20:29:07.987283] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.694 [2024-05-15 20:29:07.987694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.694 [2024-05-15 20:29:07.987714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:15.694 [2024-05-15 20:29:07.997751] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.694 [2024-05-15 20:29:07.998130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.694 [2024-05-15 20:29:07.998149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.694 [2024-05-15 20:29:08.008078] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.694 [2024-05-15 20:29:08.008458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.694 [2024-05-15 20:29:08.008478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:15.694 [2024-05-15 20:29:08.018215] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.694 [2024-05-15 20:29:08.018704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.694 [2024-05-15 20:29:08.018724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:15.694 [2024-05-15 20:29:08.027752] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.694 [2024-05-15 20:29:08.027993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.694 [2024-05-15 20:29:08.028012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:15.694 [2024-05-15 20:29:08.037278] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.694 [2024-05-15 20:29:08.037557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.694 [2024-05-15 20:29:08.037578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.694 [2024-05-15 20:29:08.045081] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.694 [2024-05-15 20:29:08.045196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.694 [2024-05-15 20:29:08.045215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:15.694 [2024-05-15 20:29:08.055410] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.694 [2024-05-15 20:29:08.055935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.694 [2024-05-15 20:29:08.055956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:15.694 [2024-05-15 20:29:08.064658] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.694 [2024-05-15 20:29:08.064917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.694 [2024-05-15 20:29:08.064937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:15.694 [2024-05-15 20:29:08.074666] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.694 [2024-05-15 20:29:08.075022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.694 [2024-05-15 20:29:08.075042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.694 [2024-05-15 20:29:08.083367] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.694 [2024-05-15 20:29:08.083735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.694 [2024-05-15 20:29:08.083756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:15.694 [2024-05-15 20:29:08.093051] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.694 [2024-05-15 20:29:08.093490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.694 [2024-05-15 20:29:08.093510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:15.694 [2024-05-15 20:29:08.102952] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.694 [2024-05-15 20:29:08.103389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.694 [2024-05-15 20:29:08.103410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:15.694 [2024-05-15 20:29:08.112839] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.694 [2024-05-15 20:29:08.113256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.694 [2024-05-15 20:29:08.113275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.694 [2024-05-15 20:29:08.123291] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.694 [2024-05-15 20:29:08.123746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.694 [2024-05-15 20:29:08.123766] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:15.694 [2024-05-15 20:29:08.134711] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.694 [2024-05-15 20:29:08.135096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.694 [2024-05-15 20:29:08.135116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:15.694 [2024-05-15 20:29:08.146175] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.694 [2024-05-15 20:29:08.146715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.694 [2024-05-15 20:29:08.146734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:15.694 [2024-05-15 20:29:08.156925] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.694 [2024-05-15 20:29:08.157006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.694 [2024-05-15 20:29:08.157023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.694 [2024-05-15 20:29:08.168744] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.694 [2024-05-15 20:29:08.168863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.694 [2024-05-15 20:29:08.168881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:15.694 [2024-05-15 20:29:08.180367] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.694 [2024-05-15 20:29:08.180644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.694 [2024-05-15 20:29:08.180665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:15.694 [2024-05-15 20:29:08.190307] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.694 [2024-05-15 20:29:08.190693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.694 [2024-05-15 20:29:08.190713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:15.956 [2024-05-15 20:29:08.200951] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.956 [2024-05-15 20:29:08.201139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.956 
[2024-05-15 20:29:08.201160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.956 [2024-05-15 20:29:08.211118] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.956 [2024-05-15 20:29:08.211675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.956 [2024-05-15 20:29:08.211695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:15.956 [2024-05-15 20:29:08.220178] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.956 [2024-05-15 20:29:08.220529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.956 [2024-05-15 20:29:08.220549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:15.956 [2024-05-15 20:29:08.231225] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.956 [2024-05-15 20:29:08.231691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.956 [2024-05-15 20:29:08.231712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:15.956 [2024-05-15 20:29:08.239140] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.956 [2024-05-15 20:29:08.239405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.956 [2024-05-15 20:29:08.239425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.956 [2024-05-15 20:29:08.244954] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.956 [2024-05-15 20:29:08.245200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.956 [2024-05-15 20:29:08.245219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:15.956 [2024-05-15 20:29:08.251893] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.957 [2024-05-15 20:29:08.252140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.957 [2024-05-15 20:29:08.252159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:15.957 [2024-05-15 20:29:08.258491] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.957 [2024-05-15 20:29:08.258777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.957 [2024-05-15 20:29:08.258796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:15.957 [2024-05-15 20:29:08.265369] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.957 [2024-05-15 20:29:08.265818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.957 [2024-05-15 20:29:08.265837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.957 [2024-05-15 20:29:08.272281] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.957 [2024-05-15 20:29:08.272702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.957 [2024-05-15 20:29:08.272723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:15.957 [2024-05-15 20:29:08.279177] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.957 [2024-05-15 20:29:08.279490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.957 [2024-05-15 20:29:08.279510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:15.957 [2024-05-15 20:29:08.284983] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.957 [2024-05-15 20:29:08.285233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.957 [2024-05-15 20:29:08.285252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:15.957 [2024-05-15 20:29:08.292265] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.957 [2024-05-15 20:29:08.292546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.957 [2024-05-15 20:29:08.292566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.957 [2024-05-15 20:29:08.299781] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.957 [2024-05-15 20:29:08.300026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.957 [2024-05-15 20:29:08.300047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:15.957 [2024-05-15 20:29:08.307433] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.957 [2024-05-15 20:29:08.307682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.957 [2024-05-15 20:29:08.307702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:15.957 [2024-05-15 20:29:08.314711] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.957 [2024-05-15 20:29:08.314955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.957 [2024-05-15 20:29:08.314974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:15.957 [2024-05-15 20:29:08.320698] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.957 [2024-05-15 20:29:08.320946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.957 [2024-05-15 20:29:08.320965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.957 [2024-05-15 20:29:08.328473] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.957 [2024-05-15 20:29:08.328994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.957 [2024-05-15 20:29:08.329014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:15.957 [2024-05-15 20:29:08.337517] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.957 [2024-05-15 20:29:08.337995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.957 [2024-05-15 20:29:08.338014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:15.957 [2024-05-15 20:29:08.345562] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.957 [2024-05-15 20:29:08.345804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.957 [2024-05-15 20:29:08.345823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:15.957 [2024-05-15 20:29:08.352627] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.957 [2024-05-15 20:29:08.352955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.957 [2024-05-15 20:29:08.352975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.957 [2024-05-15 20:29:08.359964] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.957 [2024-05-15 20:29:08.360241] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.957 [2024-05-15 20:29:08.360261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:15.957 [2024-05-15 20:29:08.368408] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.957 [2024-05-15 20:29:08.368961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.957 [2024-05-15 20:29:08.368982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:15.957 [2024-05-15 20:29:08.377931] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.957 [2024-05-15 20:29:08.378240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.957 [2024-05-15 20:29:08.378260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:15.957 [2024-05-15 20:29:08.386757] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.957 [2024-05-15 20:29:08.387188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.957 [2024-05-15 20:29:08.387208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.957 [2024-05-15 20:29:08.396132] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.957 [2024-05-15 20:29:08.396744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.957 [2024-05-15 20:29:08.396765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:15.957 [2024-05-15 20:29:08.405896] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.957 [2024-05-15 20:29:08.406194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.957 [2024-05-15 20:29:08.406217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:15.957 [2024-05-15 20:29:08.417020] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.957 [2024-05-15 20:29:08.417453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.957 [2024-05-15 20:29:08.417473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:15.957 [2024-05-15 20:29:08.428114] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.957 
[2024-05-15 20:29:08.428807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.957 [2024-05-15 20:29:08.428828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.957 [2024-05-15 20:29:08.439893] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.957 [2024-05-15 20:29:08.440491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.957 [2024-05-15 20:29:08.440512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:15.957 [2024-05-15 20:29:08.451921] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:15.957 [2024-05-15 20:29:08.452431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.957 [2024-05-15 20:29:08.452450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:16.219 [2024-05-15 20:29:08.463309] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.219 [2024-05-15 20:29:08.463712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.219 [2024-05-15 20:29:08.463731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:16.219 [2024-05-15 20:29:08.474164] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.219 [2024-05-15 20:29:08.474749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.219 [2024-05-15 20:29:08.474770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.219 [2024-05-15 20:29:08.485712] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.219 [2024-05-15 20:29:08.486231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.219 [2024-05-15 20:29:08.486251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:16.219 [2024-05-15 20:29:08.495006] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.219 [2024-05-15 20:29:08.495480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.219 [2024-05-15 20:29:08.495500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:16.219 [2024-05-15 20:29:08.503766] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.219 [2024-05-15 20:29:08.504129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.219 [2024-05-15 20:29:08.504149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:16.219 [2024-05-15 20:29:08.513418] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.219 [2024-05-15 20:29:08.513803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.219 [2024-05-15 20:29:08.513822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.219 [2024-05-15 20:29:08.523001] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.219 [2024-05-15 20:29:08.523386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.219 [2024-05-15 20:29:08.523406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:16.219 [2024-05-15 20:29:08.532536] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.219 [2024-05-15 20:29:08.532902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.219 [2024-05-15 20:29:08.532922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:16.219 [2024-05-15 20:29:08.541102] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.219 [2024-05-15 20:29:08.541408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.219 [2024-05-15 20:29:08.541428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:16.219 [2024-05-15 20:29:08.547429] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.219 [2024-05-15 20:29:08.547794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.219 [2024-05-15 20:29:08.547814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.219 [2024-05-15 20:29:08.554331] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.219 [2024-05-15 20:29:08.554575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.219 [2024-05-15 20:29:08.554602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:16.219 [2024-05-15 20:29:08.560451] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.219 [2024-05-15 20:29:08.560715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.219 [2024-05-15 20:29:08.560735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:16.219 [2024-05-15 20:29:08.565989] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.219 [2024-05-15 20:29:08.566193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.220 [2024-05-15 20:29:08.566216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:16.220 [2024-05-15 20:29:08.571828] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.220 [2024-05-15 20:29:08.572040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.220 [2024-05-15 20:29:08.572059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.220 [2024-05-15 20:29:08.577446] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.220 [2024-05-15 20:29:08.577673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.220 [2024-05-15 20:29:08.577700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:16.220 [2024-05-15 20:29:08.582834] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.220 [2024-05-15 20:29:08.583045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.220 [2024-05-15 20:29:08.583065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:16.220 [2024-05-15 20:29:08.591648] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.220 [2024-05-15 20:29:08.591908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.220 [2024-05-15 20:29:08.591928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:16.220 [2024-05-15 20:29:08.599914] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.220 [2024-05-15 20:29:08.600414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.220 [2024-05-15 20:29:08.600442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:37:16.220 [2024-05-15 20:29:08.606417] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.220 [2024-05-15 20:29:08.606626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.220 [2024-05-15 20:29:08.606645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:16.220 [2024-05-15 20:29:08.611673] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.220 [2024-05-15 20:29:08.611944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.220 [2024-05-15 20:29:08.611963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:16.220 [2024-05-15 20:29:08.619437] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.220 [2024-05-15 20:29:08.619641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.220 [2024-05-15 20:29:08.619659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:16.220 [2024-05-15 20:29:08.624528] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.220 [2024-05-15 20:29:08.624837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.220 [2024-05-15 20:29:08.624857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.220 [2024-05-15 20:29:08.629739] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.220 [2024-05-15 20:29:08.630008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.220 [2024-05-15 20:29:08.630028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:16.220 [2024-05-15 20:29:08.634382] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.220 [2024-05-15 20:29:08.634588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.220 [2024-05-15 20:29:08.634607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:16.220 [2024-05-15 20:29:08.639637] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.220 [2024-05-15 20:29:08.639995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.220 [2024-05-15 20:29:08.640015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:16.220 [2024-05-15 20:29:08.646536] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.220 [2024-05-15 20:29:08.646743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.220 [2024-05-15 20:29:08.646762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.220 [2024-05-15 20:29:08.652295] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.220 [2024-05-15 20:29:08.652617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.220 [2024-05-15 20:29:08.652638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:16.220 [2024-05-15 20:29:08.658183] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.220 [2024-05-15 20:29:08.658480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.220 [2024-05-15 20:29:08.658500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:16.220 [2024-05-15 20:29:08.663183] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.220 [2024-05-15 20:29:08.663393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.220 [2024-05-15 20:29:08.663412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:16.220 [2024-05-15 20:29:08.670577] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.220 [2024-05-15 20:29:08.670922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.220 [2024-05-15 20:29:08.670942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.220 [2024-05-15 20:29:08.676261] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.220 [2024-05-15 20:29:08.676531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.220 [2024-05-15 20:29:08.676551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:16.220 [2024-05-15 20:29:08.682000] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.220 [2024-05-15 20:29:08.682207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.220 [2024-05-15 20:29:08.682226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:16.220 [2024-05-15 20:29:08.687256] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.220 [2024-05-15 20:29:08.687462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.220 [2024-05-15 20:29:08.687481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:16.220 [2024-05-15 20:29:08.693060] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.220 [2024-05-15 20:29:08.693275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.220 [2024-05-15 20:29:08.693294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.220 [2024-05-15 20:29:08.698285] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.220 [2024-05-15 20:29:08.698501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.220 [2024-05-15 20:29:08.698520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:16.220 [2024-05-15 20:29:08.705815] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.220 [2024-05-15 20:29:08.706077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.220 [2024-05-15 20:29:08.706096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:16.220 [2024-05-15 20:29:08.712552] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.220 [2024-05-15 20:29:08.712797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.220 [2024-05-15 20:29:08.712817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:16.482 [2024-05-15 20:29:08.720881] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.482 [2024-05-15 20:29:08.721220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.482 [2024-05-15 20:29:08.721240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.482 [2024-05-15 20:29:08.726753] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.482 [2024-05-15 20:29:08.726959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.482 [2024-05-15 20:29:08.726982] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:16.482 [2024-05-15 20:29:08.732747] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.482 [2024-05-15 20:29:08.732973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.482 [2024-05-15 20:29:08.732993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:16.482 [2024-05-15 20:29:08.741738] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.482 [2024-05-15 20:29:08.741973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.482 [2024-05-15 20:29:08.741993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:16.482 [2024-05-15 20:29:08.749181] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.482 [2024-05-15 20:29:08.749443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.482 [2024-05-15 20:29:08.749462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.482 [2024-05-15 20:29:08.758128] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.482 [2024-05-15 20:29:08.758338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.482 [2024-05-15 20:29:08.758358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:16.482 [2024-05-15 20:29:08.765159] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.482 [2024-05-15 20:29:08.765368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.482 [2024-05-15 20:29:08.765388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:16.482 [2024-05-15 20:29:08.772723] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.482 [2024-05-15 20:29:08.773186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.482 [2024-05-15 20:29:08.773208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:16.482 [2024-05-15 20:29:08.780797] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.482 [2024-05-15 20:29:08.781002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.482 
[2024-05-15 20:29:08.781021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.482 [2024-05-15 20:29:08.787749] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.482 [2024-05-15 20:29:08.788006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.482 [2024-05-15 20:29:08.788035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:16.482 [2024-05-15 20:29:08.796332] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.482 [2024-05-15 20:29:08.796687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.482 [2024-05-15 20:29:08.796707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:16.482 [2024-05-15 20:29:08.805952] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.482 [2024-05-15 20:29:08.806334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.482 [2024-05-15 20:29:08.806355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:16.482 [2024-05-15 20:29:08.813905] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.482 [2024-05-15 20:29:08.814136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.482 [2024-05-15 20:29:08.814156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.482 [2024-05-15 20:29:08.823443] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.482 [2024-05-15 20:29:08.823940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.482 [2024-05-15 20:29:08.823961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:16.482 [2024-05-15 20:29:08.833071] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.482 [2024-05-15 20:29:08.833470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.482 [2024-05-15 20:29:08.833491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:16.482 [2024-05-15 20:29:08.842071] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.482 [2024-05-15 20:29:08.842372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:16.482 [2024-05-15 20:29:08.842393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:16.482 [2024-05-15 20:29:08.851650] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.482 [2024-05-15 20:29:08.851970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.482 [2024-05-15 20:29:08.851990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.482 [2024-05-15 20:29:08.861351] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.482 [2024-05-15 20:29:08.861740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.482 [2024-05-15 20:29:08.861760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:16.482 [2024-05-15 20:29:08.868366] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.482 [2024-05-15 20:29:08.868621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.482 [2024-05-15 20:29:08.868640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:16.482 [2024-05-15 20:29:08.876235] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.482 [2024-05-15 20:29:08.876587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.482 [2024-05-15 20:29:08.876608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:16.482 [2024-05-15 20:29:08.885300] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.482 [2024-05-15 20:29:08.885650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.482 [2024-05-15 20:29:08.885670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.482 [2024-05-15 20:29:08.891952] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.482 [2024-05-15 20:29:08.892156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.482 [2024-05-15 20:29:08.892175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:16.482 [2024-05-15 20:29:08.896794] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.482 [2024-05-15 20:29:08.896998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.482 [2024-05-15 20:29:08.897018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:16.482 [2024-05-15 20:29:08.901680] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.482 [2024-05-15 20:29:08.901889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.482 [2024-05-15 20:29:08.901908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:16.482 [2024-05-15 20:29:08.906855] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.482 [2024-05-15 20:29:08.907079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.482 [2024-05-15 20:29:08.907098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.482 [2024-05-15 20:29:08.913761] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.482 [2024-05-15 20:29:08.914045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.482 [2024-05-15 20:29:08.914064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:16.482 [2024-05-15 20:29:08.919665] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.482 [2024-05-15 20:29:08.919925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.482 [2024-05-15 20:29:08.919945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:16.482 [2024-05-15 20:29:08.925601] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.482 [2024-05-15 20:29:08.925883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.482 [2024-05-15 20:29:08.925907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:16.482 [2024-05-15 20:29:08.933660] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.482 [2024-05-15 20:29:08.934133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.482 [2024-05-15 20:29:08.934154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.482 [2024-05-15 20:29:08.943119] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.482 [2024-05-15 20:29:08.943511] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.482 [2024-05-15 20:29:08.943532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:16.482 [2024-05-15 20:29:08.953136] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.482 [2024-05-15 20:29:08.953470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.482 [2024-05-15 20:29:08.953491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:16.482 [2024-05-15 20:29:08.962419] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.482 [2024-05-15 20:29:08.962675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.482 [2024-05-15 20:29:08.962693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:16.482 [2024-05-15 20:29:08.972863] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.482 [2024-05-15 20:29:08.973139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.482 [2024-05-15 20:29:08.973164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.745 [2024-05-15 20:29:08.984463] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.745 [2024-05-15 20:29:08.984954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.745 [2024-05-15 20:29:08.984974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:16.746 [2024-05-15 20:29:08.995723] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.746 [2024-05-15 20:29:08.996088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.746 [2024-05-15 20:29:08.996108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:16.746 [2024-05-15 20:29:09.004564] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.746 [2024-05-15 20:29:09.004768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.746 [2024-05-15 20:29:09.004787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:16.746 [2024-05-15 20:29:09.011830] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.746 
[2024-05-15 20:29:09.012047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.746 [2024-05-15 20:29:09.012066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.746 [2024-05-15 20:29:09.018394] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.746 [2024-05-15 20:29:09.018671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.746 [2024-05-15 20:29:09.018691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:16.746 [2024-05-15 20:29:09.027172] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.746 [2024-05-15 20:29:09.027528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.746 [2024-05-15 20:29:09.027549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:16.746 [2024-05-15 20:29:09.034440] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.746 [2024-05-15 20:29:09.034660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.746 [2024-05-15 20:29:09.034679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:16.746 [2024-05-15 20:29:09.040061] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.746 [2024-05-15 20:29:09.040322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.746 [2024-05-15 20:29:09.040342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.746 [2024-05-15 20:29:09.045908] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.746 [2024-05-15 20:29:09.046228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.746 [2024-05-15 20:29:09.046248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:16.746 [2024-05-15 20:29:09.051124] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.746 [2024-05-15 20:29:09.051337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.746 [2024-05-15 20:29:09.051356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:16.746 [2024-05-15 20:29:09.059191] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.746 [2024-05-15 20:29:09.059403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.746 [2024-05-15 20:29:09.059422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:16.746 [2024-05-15 20:29:09.068877] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.746 [2024-05-15 20:29:09.069275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.746 [2024-05-15 20:29:09.069299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.746 [2024-05-15 20:29:09.076397] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.746 [2024-05-15 20:29:09.076830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.746 [2024-05-15 20:29:09.076850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:16.746 [2024-05-15 20:29:09.084736] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.746 [2024-05-15 20:29:09.084942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.746 [2024-05-15 20:29:09.084961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:16.746 [2024-05-15 20:29:09.090660] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.746 [2024-05-15 20:29:09.090880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.746 [2024-05-15 20:29:09.090899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:16.746 [2024-05-15 20:29:09.095985] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.746 [2024-05-15 20:29:09.096270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.746 [2024-05-15 20:29:09.096290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.746 [2024-05-15 20:29:09.102000] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.746 [2024-05-15 20:29:09.102208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.746 [2024-05-15 20:29:09.102227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:16.746 [2024-05-15 20:29:09.107858] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.746 [2024-05-15 20:29:09.108110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.746 [2024-05-15 20:29:09.108136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:16.746 [2024-05-15 20:29:09.116861] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.746 [2024-05-15 20:29:09.117120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.746 [2024-05-15 20:29:09.117147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:16.746 [2024-05-15 20:29:09.124925] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.746 [2024-05-15 20:29:09.125336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.746 [2024-05-15 20:29:09.125355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.746 [2024-05-15 20:29:09.134033] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.746 [2024-05-15 20:29:09.134392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.746 [2024-05-15 20:29:09.134413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:16.746 [2024-05-15 20:29:09.141866] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.746 [2024-05-15 20:29:09.142372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.746 [2024-05-15 20:29:09.142394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:16.746 [2024-05-15 20:29:09.149595] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.746 [2024-05-15 20:29:09.149806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.746 [2024-05-15 20:29:09.149825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:16.746 [2024-05-15 20:29:09.157543] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.746 [2024-05-15 20:29:09.158026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.746 [2024-05-15 20:29:09.158048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
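These entries repeat the same three-step pattern, which continues below: tcp.c's data_crc32_calc_done() reports a data digest (CRC32C) mismatch on a received data PDU, the WRITE command that carried the payload is printed, and its completion is returned as COMMAND TRANSIENT TRANSPORT ERROR (00/22). A quick cross-check for a run like this is to confirm that every digest failure pairs with exactly one transient-error completion; below is a minimal sketch, assuming the bdevperf console output has been captured to a file (the log path is hypothetical):

#!/usr/bin/env bash
# Tally digest failures and transient-error completions in a captured
# bdevperf console log. LOG is a hypothetical path; point it at the real file.
LOG=${1:-bperf_console.log}

digest_errors=$(grep -o 'Data digest error on tqpair' "$LOG" | wc -l)
transient=$(grep -o 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' "$LOG" | wc -l)

echo "data digest errors:     $digest_errors"
echo "transient completions:  $transient"

# Each digest failure is expected to surface as exactly one transient
# transport error, so the two counts should match in a clean run.
[[ "$digest_errors" -eq "$transient" ]] || echo "counts differ" >&2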
00:37:16.746 [2024-05-15 20:29:09.166087] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.746 [2024-05-15 20:29:09.166399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.746 [2024-05-15 20:29:09.166419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:16.746 [2024-05-15 20:29:09.174531] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.746 [2024-05-15 20:29:09.174947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.746 [2024-05-15 20:29:09.174968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:16.746 [2024-05-15 20:29:09.184328] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.746 [2024-05-15 20:29:09.184778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.746 [2024-05-15 20:29:09.184800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:16.746 [2024-05-15 20:29:09.194248] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.746 [2024-05-15 20:29:09.194608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.746 [2024-05-15 20:29:09.194629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.746 [2024-05-15 20:29:09.204858] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.747 [2024-05-15 20:29:09.205111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.747 [2024-05-15 20:29:09.205131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:16.747 [2024-05-15 20:29:09.212402] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.747 [2024-05-15 20:29:09.212754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.747 [2024-05-15 20:29:09.212774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:16.747 [2024-05-15 20:29:09.221511] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.747 [2024-05-15 20:29:09.221999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.747 [2024-05-15 20:29:09.222020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:16.747 [2024-05-15 20:29:09.231203] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.747 [2024-05-15 20:29:09.231592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.747 [2024-05-15 20:29:09.231612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:16.747 [2024-05-15 20:29:09.243131] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:16.747 [2024-05-15 20:29:09.243464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:16.747 [2024-05-15 20:29:09.243483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:17.009 [2024-05-15 20:29:09.251632] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.009 [2024-05-15 20:29:09.252010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.009 [2024-05-15 20:29:09.252029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:17.009 [2024-05-15 20:29:09.259874] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.009 [2024-05-15 20:29:09.260177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.009 [2024-05-15 20:29:09.260197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:17.009 [2024-05-15 20:29:09.268252] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.009 [2024-05-15 20:29:09.268665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.009 [2024-05-15 20:29:09.268684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:17.009 [2024-05-15 20:29:09.277236] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.009 [2024-05-15 20:29:09.277501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.009 [2024-05-15 20:29:09.277521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:17.009 [2024-05-15 20:29:09.286092] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.009 [2024-05-15 20:29:09.286442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.009 [2024-05-15 20:29:09.286465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:17.009 [2024-05-15 20:29:09.296257] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.009 [2024-05-15 20:29:09.296572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.009 [2024-05-15 20:29:09.296591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:17.009 [2024-05-15 20:29:09.305535] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.009 [2024-05-15 20:29:09.305847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.009 [2024-05-15 20:29:09.305868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:17.009 [2024-05-15 20:29:09.314817] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.009 [2024-05-15 20:29:09.315147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.009 [2024-05-15 20:29:09.315167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:17.009 [2024-05-15 20:29:09.322723] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.009 [2024-05-15 20:29:09.323107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.009 [2024-05-15 20:29:09.323127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:17.009 [2024-05-15 20:29:09.332879] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.009 [2024-05-15 20:29:09.333092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.009 [2024-05-15 20:29:09.333112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:17.009 [2024-05-15 20:29:09.342713] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.009 [2024-05-15 20:29:09.343075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.009 [2024-05-15 20:29:09.343095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:17.010 [2024-05-15 20:29:09.351302] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.010 [2024-05-15 20:29:09.351728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.010 [2024-05-15 20:29:09.351748] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:17.010 [2024-05-15 20:29:09.360963] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.010 [2024-05-15 20:29:09.361268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.010 [2024-05-15 20:29:09.361288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:17.010 [2024-05-15 20:29:09.370764] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.010 [2024-05-15 20:29:09.371117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.010 [2024-05-15 20:29:09.371138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:17.010 [2024-05-15 20:29:09.380796] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.010 [2024-05-15 20:29:09.381160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.010 [2024-05-15 20:29:09.381180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:17.010 [2024-05-15 20:29:09.390554] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.010 [2024-05-15 20:29:09.390879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.010 [2024-05-15 20:29:09.390900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:17.010 [2024-05-15 20:29:09.399576] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.010 [2024-05-15 20:29:09.399900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.010 [2024-05-15 20:29:09.399920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:17.010 [2024-05-15 20:29:09.409978] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.010 [2024-05-15 20:29:09.410293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.010 [2024-05-15 20:29:09.410320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:17.010 [2024-05-15 20:29:09.418941] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.010 [2024-05-15 20:29:09.419341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.010 
[2024-05-15 20:29:09.419361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:17.010 [2024-05-15 20:29:09.427636] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.010 [2024-05-15 20:29:09.428119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.010 [2024-05-15 20:29:09.428140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:17.010 [2024-05-15 20:29:09.437152] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.010 [2024-05-15 20:29:09.437547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.010 [2024-05-15 20:29:09.437568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:17.010 [2024-05-15 20:29:09.445987] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.010 [2024-05-15 20:29:09.446107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.010 [2024-05-15 20:29:09.446127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:17.010 [2024-05-15 20:29:09.454325] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.010 [2024-05-15 20:29:09.454798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.010 [2024-05-15 20:29:09.454818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:17.010 [2024-05-15 20:29:09.463716] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.010 [2024-05-15 20:29:09.464024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.010 [2024-05-15 20:29:09.464044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:17.010 [2024-05-15 20:29:09.473653] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.010 [2024-05-15 20:29:09.473874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.010 [2024-05-15 20:29:09.473893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:17.010 [2024-05-15 20:29:09.483060] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.010 [2024-05-15 20:29:09.483340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.010 [2024-05-15 20:29:09.483361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:17.010 [2024-05-15 20:29:09.491629] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.010 [2024-05-15 20:29:09.491946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.010 [2024-05-15 20:29:09.491966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:17.010 [2024-05-15 20:29:09.500149] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.010 [2024-05-15 20:29:09.500518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.010 [2024-05-15 20:29:09.500538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:17.311 [2024-05-15 20:29:09.510298] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.311 [2024-05-15 20:29:09.510513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.311 [2024-05-15 20:29:09.510532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:17.311 [2024-05-15 20:29:09.518243] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.311 [2024-05-15 20:29:09.518582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.311 [2024-05-15 20:29:09.518602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:17.311 [2024-05-15 20:29:09.526252] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.311 [2024-05-15 20:29:09.526585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.311 [2024-05-15 20:29:09.526608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:17.311 [2024-05-15 20:29:09.535751] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.311 [2024-05-15 20:29:09.536053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.312 [2024-05-15 20:29:09.536073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:17.312 [2024-05-15 20:29:09.545192] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.312 [2024-05-15 20:29:09.545467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.312 [2024-05-15 20:29:09.545487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:17.312 [2024-05-15 20:29:09.555496] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.312 [2024-05-15 20:29:09.555922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.312 [2024-05-15 20:29:09.555942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:17.312 [2024-05-15 20:29:09.566095] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.312 [2024-05-15 20:29:09.566505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.312 [2024-05-15 20:29:09.566525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:17.312 [2024-05-15 20:29:09.574080] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.312 [2024-05-15 20:29:09.574463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.312 [2024-05-15 20:29:09.574483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:17.312 [2024-05-15 20:29:09.582663] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.312 [2024-05-15 20:29:09.583023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.312 [2024-05-15 20:29:09.583043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:17.312 [2024-05-15 20:29:09.592014] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.312 [2024-05-15 20:29:09.592260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.312 [2024-05-15 20:29:09.592279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:17.312 [2024-05-15 20:29:09.600478] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.312 [2024-05-15 20:29:09.600793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.312 [2024-05-15 20:29:09.600812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:17.312 [2024-05-15 20:29:09.608831] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.312 [2024-05-15 20:29:09.609016] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.312 [2024-05-15 20:29:09.609035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:17.312 [2024-05-15 20:29:09.616496] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.312 [2024-05-15 20:29:09.616676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.312 [2024-05-15 20:29:09.616695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:17.312 [2024-05-15 20:29:09.620977] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.312 [2024-05-15 20:29:09.621155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.312 [2024-05-15 20:29:09.621173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:17.312 [2024-05-15 20:29:09.627672] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.312 [2024-05-15 20:29:09.627951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.312 [2024-05-15 20:29:09.627971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:17.312 [2024-05-15 20:29:09.635421] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.312 [2024-05-15 20:29:09.635703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.312 [2024-05-15 20:29:09.635722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:17.312 [2024-05-15 20:29:09.641705] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.312 [2024-05-15 20:29:09.641886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.312 [2024-05-15 20:29:09.641905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:17.312 [2024-05-15 20:29:09.647489] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.312 [2024-05-15 20:29:09.647836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.312 [2024-05-15 20:29:09.647857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:17.312 [2024-05-15 20:29:09.656248] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.312 
[2024-05-15 20:29:09.656457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.312 [2024-05-15 20:29:09.656476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:17.312 [2024-05-15 20:29:09.661311] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.312 [2024-05-15 20:29:09.661584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.312 [2024-05-15 20:29:09.661606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:17.312 [2024-05-15 20:29:09.666922] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.312 [2024-05-15 20:29:09.667381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.312 [2024-05-15 20:29:09.667400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:17.312 [2024-05-15 20:29:09.674165] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.312 [2024-05-15 20:29:09.674432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.312 [2024-05-15 20:29:09.674452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:17.312 [2024-05-15 20:29:09.682616] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c3c00) with pdu=0x2000190fef90 00:37:17.312 [2024-05-15 20:29:09.682966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:17.312 [2024-05-15 20:29:09.682986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:17.312 00:37:17.312 Latency(us) 00:37:17.312 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:17.312 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:37:17.312 nvme0n1 : 2.00 3590.76 448.85 0.00 0.00 4446.16 2075.31 15947.09 00:37:17.312 =================================================================================================================== 00:37:17.312 Total : 3590.76 448.85 0.00 0.00 4446.16 2075.31 15947.09 00:37:17.312 0 00:37:17.312 20:29:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:17.312 20:29:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:17.313 20:29:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:17.313 | .driver_specific 00:37:17.313 | .nvme_error 00:37:17.313 | .status_code 00:37:17.313 | .command_transient_transport_error' 00:37:17.313 20:29:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:17.618 20:29:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 232 > 0 )) 00:37:17.618 20:29:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 311900 00:37:17.618 20:29:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 311900 ']' 00:37:17.618 20:29:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 311900 00:37:17.618 20:29:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:37:17.618 20:29:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:17.618 20:29:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 311900 00:37:17.618 20:29:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:17.618 20:29:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:17.618 20:29:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 311900' 00:37:17.618 killing process with pid 311900 00:37:17.618 20:29:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 311900 00:37:17.618 Received shutdown signal, test time was about 2.000000 seconds 00:37:17.618 00:37:17.618 Latency(us) 00:37:17.618 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:17.618 =================================================================================================================== 00:37:17.618 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:17.618 20:29:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 311900 00:37:17.618 20:29:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 309819 00:37:17.618 20:29:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 309819 ']' 00:37:17.618 20:29:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 309819 00:37:17.618 20:29:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:37:17.618 20:29:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:17.618 20:29:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 309819 00:37:17.879 20:29:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:17.879 20:29:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:17.879 20:29:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 309819' 00:37:17.879 killing process with pid 309819 00:37:17.879 20:29:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 309819 00:37:17.879 [2024-05-15 20:29:10.160456] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:37:17.879 20:29:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 309819 00:37:17.879 00:37:17.879 real 0m14.581s 00:37:17.879 user 0m28.631s 00:37:17.879 sys 0m3.344s 
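Editor's note on the assertion traced above: host/digest.sh@71 only checks that the transient-transport-error counter moved after the data-digest failures. A minimal sketch of that counter extraction, assuming the bdevperf instance is still serving RPCs on /var/tmp/bperf.sock and using the same bdev name, jq filter and workspace rpc.py path as the traced commands:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Ask bdevperf for per-bdev statistics and pull out the NVMe
# "command transient transport error" count reported by the controller.
errcount=$("$RPC" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errcount > 0 ))   # test passes when at least one such completion was seen (232 in this run)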
00:37:17.879 20:29:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:17.879 20:29:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:17.879 ************************************ 00:37:17.879 END TEST nvmf_digest_error 00:37:17.879 ************************************ 00:37:17.879 20:29:10 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:37:17.879 20:29:10 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:37:17.879 20:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:17.879 20:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:37:17.879 20:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:17.879 20:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:37:17.879 20:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:17.879 20:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:17.879 rmmod nvme_tcp 00:37:17.879 rmmod nvme_fabrics 00:37:17.879 rmmod nvme_keyring 00:37:17.879 20:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:18.140 20:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:37:18.140 20:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:37:18.140 20:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 309819 ']' 00:37:18.140 20:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 309819 00:37:18.140 20:29:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 309819 ']' 00:37:18.140 20:29:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 309819 00:37:18.140 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (309819) - No such process 00:37:18.140 20:29:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 309819 is not found' 00:37:18.140 Process with pid 309819 is not found 00:37:18.140 20:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:18.140 20:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:18.140 20:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:18.140 20:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:18.140 20:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:18.140 20:29:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:18.140 20:29:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:18.140 20:29:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:20.052 20:29:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:20.052 00:37:20.052 real 0m39.750s 00:37:20.052 user 0m59.543s 00:37:20.052 sys 0m12.846s 00:37:20.052 20:29:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:20.052 20:29:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:20.052 ************************************ 00:37:20.052 END TEST nvmf_digest 00:37:20.052 ************************************ 00:37:20.052 20:29:12 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:37:20.052 20:29:12 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:37:20.052 20:29:12 nvmf_tcp -- 
nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:37:20.052 20:29:12 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:20.052 20:29:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:37:20.052 20:29:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:20.052 20:29:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:20.313 ************************************ 00:37:20.313 START TEST nvmf_bdevperf 00:37:20.313 ************************************ 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:20.313 * Looking for test storage... 00:37:20.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:37:20.313 20:29:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:28.449 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:28.449 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:37:28.449 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:28.449 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:28.449 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:28.449 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:28.449 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:28.449 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:37:28.449 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:28.449 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:37:28.449 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:37:28.449 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:37:28.449 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:37:28.449 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:37:28.449 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:37:28.449 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:28.449 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:28.449 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:28.449 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:28.449 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:28.449 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:28.450 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:28.450 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:28.450 Found net devices under 0000:31:00.0: cvl_0_0 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
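Aside on the device discovery being traced here: the pci_net_devs glob is a plain sysfs lookup, and the mapping it produced for the first E810 port in this run (0000:31:00.0 resolving to cvl_0_0, later used as the TCP target interface) can be reproduced by hand:

# list the kernel net device(s) backed by this PCI function
ls /sys/bus/pci/devices/0000:31:00.0/net/
# -> cvl_0_0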
00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:28.450 Found net devices under 0000:31:00.1: cvl_0_1 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:28.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:28.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:37:28.450 00:37:28.450 --- 10.0.0.2 ping statistics --- 00:37:28.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:28.450 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:28.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:28.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.343 ms 00:37:28.450 00:37:28.450 --- 10.0.0.1 ping statistics --- 00:37:28.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:28.450 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=317246 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 317246 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 317246 ']' 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:28.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:28.450 20:29:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:28.450 [2024-05-15 20:29:20.836367] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
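The nvmf_tcp_init and nvmfappstart steps traced above reduce to a short command sequence: isolate the target-side NIC in a network namespace, address both ends, open the NVMe/TCP port, verify reachability, then start nvmf_tgt inside the namespace. A condensed sketch follows, using the interface names, addresses, flags and paths reported by this run; it illustrates what the harness does rather than replacing it.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target NIC moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow the NVMe/TCP listener port
ping -c 1 10.0.0.2                                                 # reachability check, as above
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!                                                         # 317246 in this run
# the harness then waits for the target to listen on /var/tmp/spdk.sock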
00:37:28.450 [2024-05-15 20:29:20.836417] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:28.450 EAL: No free 2048 kB hugepages reported on node 1 00:37:28.450 [2024-05-15 20:29:20.905355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:28.721 [2024-05-15 20:29:20.972103] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:28.721 [2024-05-15 20:29:20.972138] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:28.721 [2024-05-15 20:29:20.972146] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:28.721 [2024-05-15 20:29:20.972153] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:28.721 [2024-05-15 20:29:20.972158] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:28.721 [2024-05-15 20:29:20.972263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:37:28.721 [2024-05-15 20:29:20.972423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:28.721 [2024-05-15 20:29:20.972424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:37:28.721 20:29:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:28.721 20:29:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:37:28.721 20:29:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:28.721 20:29:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:28.721 20:29:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:28.721 20:29:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:28.721 20:29:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:28.721 20:29:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:28.721 20:29:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:28.721 [2024-05-15 20:29:21.098035] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:28.721 20:29:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:28.721 20:29:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:28.721 20:29:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:28.721 20:29:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:28.721 Malloc0 00:37:28.721 20:29:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:28.721 20:29:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:28.721 20:29:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:28.721 20:29:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:28.721 20:29:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:28.721 20:29:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
00:37:28.721 20:29:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:28.721 20:29:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:28.721 20:29:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:28.721 20:29:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:28.721 20:29:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:28.721 20:29:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:28.721 [2024-05-15 20:29:21.160439] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:37:28.721 [2024-05-15 20:29:21.160672] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:28.721 20:29:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:28.721 20:29:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:37:28.721 20:29:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:37:28.721 20:29:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:37:28.721 20:29:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:37:28.721 20:29:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:28.721 20:29:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:28.721 { 00:37:28.721 "params": { 00:37:28.721 "name": "Nvme$subsystem", 00:37:28.721 "trtype": "$TEST_TRANSPORT", 00:37:28.721 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:28.721 "adrfam": "ipv4", 00:37:28.721 "trsvcid": "$NVMF_PORT", 00:37:28.721 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:28.721 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:28.721 "hdgst": ${hdgst:-false}, 00:37:28.721 "ddgst": ${ddgst:-false} 00:37:28.721 }, 00:37:28.721 "method": "bdev_nvme_attach_controller" 00:37:28.721 } 00:37:28.721 EOF 00:37:28.721 )") 00:37:28.721 20:29:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:37:28.721 20:29:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:37:28.721 20:29:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:37:28.721 20:29:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:28.721 "params": { 00:37:28.721 "name": "Nvme1", 00:37:28.721 "trtype": "tcp", 00:37:28.721 "traddr": "10.0.0.2", 00:37:28.721 "adrfam": "ipv4", 00:37:28.721 "trsvcid": "4420", 00:37:28.721 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:28.721 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:28.721 "hdgst": false, 00:37:28.721 "ddgst": false 00:37:28.721 }, 00:37:28.721 "method": "bdev_nvme_attach_controller" 00:37:28.721 }' 00:37:28.721 [2024-05-15 20:29:21.213059] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
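For reference, the tgt_init sequence traced above comes down to five RPCs (rpc_cmd in autotest_common.sh is a wrapper around the rpc.py script). A sketch with the same transport options, bdev size, NQN, serial and listener address used here, issued against the target started above:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o -u 8192                         # TCP transport, options as traced
$RPC bdev_malloc_create 64 512 -b Malloc0                            # 64 MiB malloc bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420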
00:37:28.721 [2024-05-15 20:29:21.213107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid317271 ] 00:37:28.982 EAL: No free 2048 kB hugepages reported on node 1 00:37:28.982 [2024-05-15 20:29:21.293782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:28.982 [2024-05-15 20:29:21.358104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:29.242 Running I/O for 1 seconds... 00:37:30.197 00:37:30.197 Latency(us) 00:37:30.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:30.197 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:30.197 Verification LBA range: start 0x0 length 0x4000 00:37:30.197 Nvme1n1 : 1.00 9018.19 35.23 0.00 0.00 14130.10 1044.48 14417.92 00:37:30.197 =================================================================================================================== 00:37:30.197 Total : 9018.19 35.23 0.00 0.00 14130.10 1044.48 14417.92 00:37:30.197 20:29:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=317603 00:37:30.197 20:29:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:37:30.197 20:29:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:37:30.197 20:29:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:37:30.197 20:29:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:37:30.197 20:29:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:37:30.197 20:29:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:30.197 20:29:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:30.197 { 00:37:30.197 "params": { 00:37:30.197 "name": "Nvme$subsystem", 00:37:30.197 "trtype": "$TEST_TRANSPORT", 00:37:30.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:30.197 "adrfam": "ipv4", 00:37:30.197 "trsvcid": "$NVMF_PORT", 00:37:30.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:30.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:30.197 "hdgst": ${hdgst:-false}, 00:37:30.197 "ddgst": ${ddgst:-false} 00:37:30.197 }, 00:37:30.197 "method": "bdev_nvme_attach_controller" 00:37:30.197 } 00:37:30.197 EOF 00:37:30.197 )") 00:37:30.197 20:29:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:37:30.197 20:29:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:37:30.197 20:29:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:37:30.197 20:29:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:30.197 "params": { 00:37:30.197 "name": "Nvme1", 00:37:30.197 "trtype": "tcp", 00:37:30.197 "traddr": "10.0.0.2", 00:37:30.197 "adrfam": "ipv4", 00:37:30.197 "trsvcid": "4420", 00:37:30.197 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:30.197 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:30.197 "hdgst": false, 00:37:30.197 "ddgst": false 00:37:30.197 }, 00:37:30.197 "method": "bdev_nvme_attach_controller" 00:37:30.197 }' 00:37:30.457 [2024-05-15 20:29:22.728372] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
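The remainder of this trace is the failure-injection half of the test: a second bdevperf instance (pid 317603) is started with the same attach-controller JSON shown above, supplied on a pipe by gen_nvmf_target_json from the sourced test/nvmf/common.sh, and left running a 15-second verify workload while the target process is killed underneath it. That kill is why every in-flight command from here on completes as ABORTED - SQ DELETION. A condensed sketch of the step, with the flags and PIDs from this run:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
source "$SPDK/test/nvmf/common.sh"                                 # provides gen_nvmf_target_json
"$SPDK/build/examples/bdevperf" --json <(gen_nvmf_target_json) \
  -q 128 -o 4096 -w verify -t 15 -f &                              # flags exactly as traced above
bdevperfpid=$!                                                     # 317603 in this run
sleep 3
kill -9 317246                                                     # nvmfpid: the nvmf_tgt configured above
sleep 3                                                            # outstanding I/O now completes ABORTED - SQ DELETION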
00:37:30.457 [2024-05-15 20:29:22.728426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid317603 ] 00:37:30.457 EAL: No free 2048 kB hugepages reported on node 1 00:37:30.457 [2024-05-15 20:29:22.813243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:30.457 [2024-05-15 20:29:22.876398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:30.747 Running I/O for 15 seconds... 00:37:33.294 20:29:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 317246 00:37:33.294 20:29:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:37:33.294 [2024-05-15 20:29:25.697756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:58704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.294 [2024-05-15 20:29:25.697800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.294 [2024-05-15 20:29:25.697819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:58712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.294 [2024-05-15 20:29:25.697831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.294 [2024-05-15 20:29:25.697842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:58720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.294 [2024-05-15 20:29:25.697850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.294 [2024-05-15 20:29:25.697859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:58728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.294 [2024-05-15 20:29:25.697867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.294 [2024-05-15 20:29:25.697879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:58736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.294 [2024-05-15 20:29:25.697888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.294 [2024-05-15 20:29:25.697899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:58744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.294 [2024-05-15 20:29:25.697910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.294 [2024-05-15 20:29:25.697923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:58752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.294 [2024-05-15 20:29:25.697934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.294 [2024-05-15 20:29:25.697947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:59384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.294 [2024-05-15 20:29:25.697958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:37:33.294 [2024-05-15 20:29:25.697967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:59392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.294 [2024-05-15 20:29:25.697975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.294 [2024-05-15 20:29:25.697985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:59400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.294 [2024-05-15 20:29:25.697992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.294 [2024-05-15 20:29:25.698001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:59408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.294 [2024-05-15 20:29:25.698009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.294 [2024-05-15 20:29:25.698019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:59416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.294 [2024-05-15 20:29:25.698028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.294 [2024-05-15 20:29:25.698043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:59424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.294 [2024-05-15 20:29:25.698053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.294 [2024-05-15 20:29:25.698065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:59432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.294 [2024-05-15 20:29:25.698074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.294 [2024-05-15 20:29:25.698084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:59440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.294 [2024-05-15 20:29:25.698094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.294 [2024-05-15 20:29:25.698105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:59448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.294 [2024-05-15 20:29:25.698115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.294 [2024-05-15 20:29:25.698127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:59456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.294 [2024-05-15 20:29:25.698137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.294 [2024-05-15 20:29:25.698150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:59464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.294 [2024-05-15 20:29:25.698159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.294 [2024-05-15 
20:29:25.698170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:59472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.294 [2024-05-15 20:29:25.698179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.294 [2024-05-15 20:29:25.698191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:59480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.294 [2024-05-15 20:29:25.698200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.294 [2024-05-15 20:29:25.698209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:59488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.294 [2024-05-15 20:29:25.698217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.294 [2024-05-15 20:29:25.698226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:59496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.294 [2024-05-15 20:29:25.698233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.294 [2024-05-15 20:29:25.698242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:59504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.294 [2024-05-15 20:29:25.698249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.294 [2024-05-15 20:29:25.698258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:59512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.294 [2024-05-15 20:29:25.698265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.294 [2024-05-15 20:29:25.698274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:58760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.294 [2024-05-15 20:29:25.698283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.294 [2024-05-15 20:29:25.698292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:58768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.294 [2024-05-15 20:29:25.698300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.294 [2024-05-15 20:29:25.698309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:58776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.294 [2024-05-15 20:29:25.698321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.294 [2024-05-15 20:29:25.698331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:58784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.294 [2024-05-15 20:29:25.698338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.294 [2024-05-15 20:29:25.698347] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:58792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.294 [2024-05-15 20:29:25.698354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.294 [2024-05-15 20:29:25.698363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:58800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.294 [2024-05-15 20:29:25.698371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.295 [2024-05-15 20:29:25.698380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:58808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.295 [2024-05-15 20:29:25.698387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.295 [2024-05-15 20:29:25.698396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:59520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.295 [2024-05-15 20:29:25.698402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.295 [2024-05-15 20:29:25.698411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:59528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.295 [2024-05-15 20:29:25.698418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.295 [2024-05-15 20:29:25.698429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.295 [2024-05-15 20:29:25.698436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.295 [2024-05-15 20:29:25.698445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:59544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.295 [2024-05-15 20:29:25.698452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.295 [2024-05-15 20:29:25.698461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:59552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.295 [2024-05-15 20:29:25.698468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.295 [2024-05-15 20:29:25.698477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:59560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.295 [2024-05-15 20:29:25.698484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.295 [2024-05-15 20:29:25.698495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:59568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.295 [2024-05-15 20:29:25.698502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.295 [2024-05-15 20:29:25.698511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:45 nsid:1 lba:59576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.295 [2024-05-15 20:29:25.698518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.295 [2024-05-15 20:29:25.698528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:59584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.295 [2024-05-15 20:29:25.698534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.295 [2024-05-15 20:29:25.698543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:59592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.295 [2024-05-15 20:29:25.698550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.295 [2024-05-15 20:29:25.698559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:59600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.295 [2024-05-15 20:29:25.698566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.295 [2024-05-15 20:29:25.698575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:59608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.295 [2024-05-15 20:29:25.698583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.295 [2024-05-15 20:29:25.698592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:59616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.295 [2024-05-15 20:29:25.698598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.295 [2024-05-15 20:29:25.698608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:59624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.295 [2024-05-15 20:29:25.698615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.295 [2024-05-15 20:29:25.698624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:59632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.295 [2024-05-15 20:29:25.698631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.295 [2024-05-15 20:29:25.698640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:59640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.295 [2024-05-15 20:29:25.698647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.295 [2024-05-15 20:29:25.698655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:59648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.295 [2024-05-15 20:29:25.698662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.295 [2024-05-15 20:29:25.698671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:59656 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:37:33.295 [2024-05-15 20:29:25.698678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.295 [2024-05-15 20:29:25.698687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:58816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.295 [2024-05-15 20:29:25.698694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.295 [2024-05-15 20:29:25.698705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:58824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.295 [2024-05-15 20:29:25.698712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.295 [2024-05-15 20:29:25.698721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:58832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.295 [2024-05-15 20:29:25.698728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.295 [2024-05-15 20:29:25.698737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:58840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.295 [2024-05-15 20:29:25.698744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.295 [2024-05-15 20:29:25.698753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:58848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.295 [2024-05-15 20:29:25.698760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.295 [2024-05-15 20:29:25.698769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:58856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.295 [2024-05-15 20:29:25.698776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.295 [2024-05-15 20:29:25.698785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:58864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.295 [2024-05-15 20:29:25.698792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.295 [2024-05-15 20:29:25.698801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.295 [2024-05-15 20:29:25.698808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.295 [2024-05-15 20:29:25.698817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:59664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.295 [2024-05-15 20:29:25.698823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.295 [2024-05-15 20:29:25.698832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:59672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.295 
[2024-05-15 20:29:25.698839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.295 [2024-05-15 20:29:25.698848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:59680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.295 [2024-05-15 20:29:25.698855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.295 [2024-05-15 20:29:25.698864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:59688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.295 [2024-05-15 20:29:25.698871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.295 [2024-05-15 20:29:25.698880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:59696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.295 [2024-05-15 20:29:25.698887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.295 [2024-05-15 20:29:25.698896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:59704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.295 [2024-05-15 20:29:25.698904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.295 [2024-05-15 20:29:25.698913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:59712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.296 [2024-05-15 20:29:25.698922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.296 [2024-05-15 20:29:25.698931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:59720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:33.296 [2024-05-15 20:29:25.698938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.296 [2024-05-15 20:29:25.698948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:58880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.296 [2024-05-15 20:29:25.698955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.296 [2024-05-15 20:29:25.698964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:58888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.296 [2024-05-15 20:29:25.698971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.296 [2024-05-15 20:29:25.698980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:58896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.296 [2024-05-15 20:29:25.698987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.296 [2024-05-15 20:29:25.698996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:58904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.296 [2024-05-15 20:29:25.699003] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.296 [2024-05-15 20:29:25.699012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:58912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.296 [2024-05-15 20:29:25.699019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.296 [2024-05-15 20:29:25.699028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:58920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.296 [2024-05-15 20:29:25.699035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.296 [2024-05-15 20:29:25.699044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:58928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.296 [2024-05-15 20:29:25.699051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.296 [2024-05-15 20:29:25.699060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:58936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.296 [2024-05-15 20:29:25.699067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.296 [2024-05-15 20:29:25.699076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:58944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.296 [2024-05-15 20:29:25.699083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.296 [2024-05-15 20:29:25.699092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:58952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.296 [2024-05-15 20:29:25.699099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.296 [2024-05-15 20:29:25.699110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:58960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.296 [2024-05-15 20:29:25.699116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.296 [2024-05-15 20:29:25.699126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:58968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.296 [2024-05-15 20:29:25.699133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.296 [2024-05-15 20:29:25.699142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:58976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.296 [2024-05-15 20:29:25.699149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.296 [2024-05-15 20:29:25.699158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:58984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.296 [2024-05-15 20:29:25.699165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.296 [2024-05-15 20:29:25.699175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:58992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.296 [2024-05-15 20:29:25.699182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.296 [2024-05-15 20:29:25.699191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:59000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.296 [2024-05-15 20:29:25.699198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.296 [2024-05-15 20:29:25.699207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:59008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.296 [2024-05-15 20:29:25.699213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.296 [2024-05-15 20:29:25.699223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:59016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.296 [2024-05-15 20:29:25.699230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.296 [2024-05-15 20:29:25.699239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:59024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.296 [2024-05-15 20:29:25.699246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.296 [2024-05-15 20:29:25.699255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:59032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.296 [2024-05-15 20:29:25.699261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.296 [2024-05-15 20:29:25.699270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:59040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.296 [2024-05-15 20:29:25.699277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.296 [2024-05-15 20:29:25.699286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:59048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.296 [2024-05-15 20:29:25.699293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.296 [2024-05-15 20:29:25.699302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:59056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.296 [2024-05-15 20:29:25.699310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.296 [2024-05-15 20:29:25.699324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:59064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.296 [2024-05-15 20:29:25.699331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.296 [2024-05-15 20:29:25.699340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:59072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.296 [2024-05-15 20:29:25.699347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.296 [2024-05-15 20:29:25.699356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:59080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.296 [2024-05-15 20:29:25.699363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.296 [2024-05-15 20:29:25.699372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.296 [2024-05-15 20:29:25.699379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.296 [2024-05-15 20:29:25.699388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:59096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.296 [2024-05-15 20:29:25.699395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.296 [2024-05-15 20:29:25.699404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:59104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.296 [2024-05-15 20:29:25.699411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.296 [2024-05-15 20:29:25.699420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.296 [2024-05-15 20:29:25.699427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.296 [2024-05-15 20:29:25.699436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.296 [2024-05-15 20:29:25.699442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.296 [2024-05-15 20:29:25.699451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:59128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.296 [2024-05-15 20:29:25.699458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.296 [2024-05-15 20:29:25.699468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:59136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.296 [2024-05-15 20:29:25.699475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.296 [2024-05-15 20:29:25.699485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:59144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.296 [2024-05-15 20:29:25.699491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.296 
[2024-05-15 20:29:25.699501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:59152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.297 [2024-05-15 20:29:25.699508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.297 [2024-05-15 20:29:25.699521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:59160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.297 [2024-05-15 20:29:25.699528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.297 [2024-05-15 20:29:25.699538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:59168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.297 [2024-05-15 20:29:25.699545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.297 [2024-05-15 20:29:25.699554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.297 [2024-05-15 20:29:25.699561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.297 [2024-05-15 20:29:25.699570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:59184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.297 [2024-05-15 20:29:25.699577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.297 [2024-05-15 20:29:25.699586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.297 [2024-05-15 20:29:25.699593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.297 [2024-05-15 20:29:25.699602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:59200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.297 [2024-05-15 20:29:25.699609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.297 [2024-05-15 20:29:25.699618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:59208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.297 [2024-05-15 20:29:25.699626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.297 [2024-05-15 20:29:25.699635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:59216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.297 [2024-05-15 20:29:25.699641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.297 [2024-05-15 20:29:25.699650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:59224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.297 [2024-05-15 20:29:25.699657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.297 [2024-05-15 20:29:25.699667] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:59232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.297 [2024-05-15 20:29:25.699674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.297 [2024-05-15 20:29:25.699683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:59240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.297 [2024-05-15 20:29:25.699689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.297 [2024-05-15 20:29:25.699698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:59248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.297 [2024-05-15 20:29:25.699705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.297 [2024-05-15 20:29:25.699715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:59256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.297 [2024-05-15 20:29:25.699722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.297 [2024-05-15 20:29:25.699732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:59264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.297 [2024-05-15 20:29:25.699739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.297 [2024-05-15 20:29:25.699748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:59272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.297 [2024-05-15 20:29:25.699756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.297 [2024-05-15 20:29:25.699765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:59280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.297 [2024-05-15 20:29:25.699772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.297 [2024-05-15 20:29:25.699781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:59288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.297 [2024-05-15 20:29:25.699787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.297 [2024-05-15 20:29:25.699796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:59296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.297 [2024-05-15 20:29:25.699804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.297 [2024-05-15 20:29:25.699813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:59304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.297 [2024-05-15 20:29:25.699820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.297 [2024-05-15 20:29:25.699829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:59312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.297 [2024-05-15 20:29:25.699835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.297 [2024-05-15 20:29:25.699844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:59320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.297 [2024-05-15 20:29:25.699852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.297 [2024-05-15 20:29:25.699861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:59328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.297 [2024-05-15 20:29:25.699868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.297 [2024-05-15 20:29:25.699877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:59336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.297 [2024-05-15 20:29:25.699884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.297 [2024-05-15 20:29:25.699893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:59344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.297 [2024-05-15 20:29:25.699900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.297 [2024-05-15 20:29:25.699909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.297 [2024-05-15 20:29:25.699916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.297 [2024-05-15 20:29:25.699925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:59360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.297 [2024-05-15 20:29:25.699934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.297 [2024-05-15 20:29:25.699943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:59368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.297 [2024-05-15 20:29:25.699950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.297 [2024-05-15 20:29:25.699958] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d19a0 is same with the state(5) to be set 00:37:33.297 [2024-05-15 20:29:25.699967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:33.297 [2024-05-15 20:29:25.699972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:33.297 [2024-05-15 20:29:25.699979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59376 len:8 PRP1 0x0 PRP2 0x0 00:37:33.297 [2024-05-15 20:29:25.699986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.297 [2024-05-15 20:29:25.700024] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: 
*NOTICE*: qpair 0x23d19a0 was disconnected and freed. reset controller. 00:37:33.297 [2024-05-15 20:29:25.700068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:33.297 [2024-05-15 20:29:25.700078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.297 [2024-05-15 20:29:25.700087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:33.297 [2024-05-15 20:29:25.700094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.297 [2024-05-15 20:29:25.700102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:33.297 [2024-05-15 20:29:25.700109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.297 [2024-05-15 20:29:25.700117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:33.297 [2024-05-15 20:29:25.700123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:33.297 [2024-05-15 20:29:25.700131] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.297 [2024-05-15 20:29:25.703687] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.297 [2024-05-15 20:29:25.703708] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.297 [2024-05-15 20:29:25.704639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.297 [2024-05-15 20:29:25.705072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.298 [2024-05-15 20:29:25.705085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.298 [2024-05-15 20:29:25.705094] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.298 [2024-05-15 20:29:25.705348] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.298 [2024-05-15 20:29:25.705581] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.298 [2024-05-15 20:29:25.705591] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.298 [2024-05-15 20:29:25.705604] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.298 [2024-05-15 20:29:25.709204] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:33.298 [2024-05-15 20:29:25.717899] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.298 [2024-05-15 20:29:25.718614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.298 [2024-05-15 20:29:25.719003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.298 [2024-05-15 20:29:25.719017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.298 [2024-05-15 20:29:25.719027] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.298 [2024-05-15 20:29:25.719269] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.298 [2024-05-15 20:29:25.719504] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.298 [2024-05-15 20:29:25.719513] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.298 [2024-05-15 20:29:25.719521] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.298 [2024-05-15 20:29:25.723123] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:33.298 [2024-05-15 20:29:25.731827] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.298 [2024-05-15 20:29:25.732565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.298 [2024-05-15 20:29:25.732990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.298 [2024-05-15 20:29:25.733004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.298 [2024-05-15 20:29:25.733013] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.298 [2024-05-15 20:29:25.733256] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.298 [2024-05-15 20:29:25.733502] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.298 [2024-05-15 20:29:25.733511] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.298 [2024-05-15 20:29:25.733519] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.298 [2024-05-15 20:29:25.737118] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:33.298 [2024-05-15 20:29:25.745822] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.298 [2024-05-15 20:29:25.746581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.298 [2024-05-15 20:29:25.746973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.298 [2024-05-15 20:29:25.746986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.298 [2024-05-15 20:29:25.746996] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.298 [2024-05-15 20:29:25.747238] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.298 [2024-05-15 20:29:25.747472] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.298 [2024-05-15 20:29:25.747481] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.298 [2024-05-15 20:29:25.747489] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.298 [2024-05-15 20:29:25.751088] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:33.298 [2024-05-15 20:29:25.759788] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.298 [2024-05-15 20:29:25.760520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.298 [2024-05-15 20:29:25.760932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.298 [2024-05-15 20:29:25.760946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.298 [2024-05-15 20:29:25.760955] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.298 [2024-05-15 20:29:25.761198] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.298 [2024-05-15 20:29:25.761430] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.298 [2024-05-15 20:29:25.761440] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.298 [2024-05-15 20:29:25.761447] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.298 [2024-05-15 20:29:25.765042] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:33.298 [2024-05-15 20:29:25.773733] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.298 [2024-05-15 20:29:25.774473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.298 [2024-05-15 20:29:25.774895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.298 [2024-05-15 20:29:25.774908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.298 [2024-05-15 20:29:25.774918] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.298 [2024-05-15 20:29:25.775161] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.298 [2024-05-15 20:29:25.775397] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.298 [2024-05-15 20:29:25.775407] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.298 [2024-05-15 20:29:25.775414] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.298 [2024-05-15 20:29:25.779008] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:33.298 [2024-05-15 20:29:25.787694] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.298 [2024-05-15 20:29:25.788472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.298 [2024-05-15 20:29:25.788862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.298 [2024-05-15 20:29:25.788876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.298 [2024-05-15 20:29:25.788885] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.298 [2024-05-15 20:29:25.789129] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.298 [2024-05-15 20:29:25.789367] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.298 [2024-05-15 20:29:25.789376] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.298 [2024-05-15 20:29:25.789383] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.561 [2024-05-15 20:29:25.792987] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:33.561 [2024-05-15 20:29:25.801685] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.561 [2024-05-15 20:29:25.802405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-05-15 20:29:25.802853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-05-15 20:29:25.802867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.561 [2024-05-15 20:29:25.802877] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.561 [2024-05-15 20:29:25.803122] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.561 [2024-05-15 20:29:25.803359] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.561 [2024-05-15 20:29:25.803368] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.561 [2024-05-15 20:29:25.803375] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.561 [2024-05-15 20:29:25.806973] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:33.561 [2024-05-15 20:29:25.815664] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.561 [2024-05-15 20:29:25.816352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-05-15 20:29:25.816741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-05-15 20:29:25.816754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.561 [2024-05-15 20:29:25.816764] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.561 [2024-05-15 20:29:25.817010] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.561 [2024-05-15 20:29:25.817235] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.561 [2024-05-15 20:29:25.817244] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.561 [2024-05-15 20:29:25.817252] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.561 [2024-05-15 20:29:25.820863] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:33.561 [2024-05-15 20:29:25.829547] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.561 [2024-05-15 20:29:25.830209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-05-15 20:29:25.830471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-05-15 20:29:25.830483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.561 [2024-05-15 20:29:25.830491] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.561 [2024-05-15 20:29:25.830715] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.561 [2024-05-15 20:29:25.830937] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.561 [2024-05-15 20:29:25.830945] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.561 [2024-05-15 20:29:25.830952] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.561 [2024-05-15 20:29:25.834569] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:33.561 [2024-05-15 20:29:25.843494] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.561 [2024-05-15 20:29:25.844055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-05-15 20:29:25.844490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-05-15 20:29:25.844507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.561 [2024-05-15 20:29:25.844517] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.561 [2024-05-15 20:29:25.844766] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.561 [2024-05-15 20:29:25.844995] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.561 [2024-05-15 20:29:25.845004] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.561 [2024-05-15 20:29:25.845013] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.561 [2024-05-15 20:29:25.848618] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:33.561 [2024-05-15 20:29:25.857524] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.561 [2024-05-15 20:29:25.858144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.561 [2024-05-15 20:29:25.858525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-05-15 20:29:25.858536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.562 [2024-05-15 20:29:25.858544] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.562 [2024-05-15 20:29:25.858769] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.562 [2024-05-15 20:29:25.858991] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.562 [2024-05-15 20:29:25.858999] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.562 [2024-05-15 20:29:25.859006] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.562 [2024-05-15 20:29:25.862606] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:33.562 [2024-05-15 20:29:25.871509] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.562 [2024-05-15 20:29:25.872233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-05-15 20:29:25.872731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-05-15 20:29:25.872746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.562 [2024-05-15 20:29:25.872757] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.562 [2024-05-15 20:29:25.873011] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.562 [2024-05-15 20:29:25.873238] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.562 [2024-05-15 20:29:25.873246] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.562 [2024-05-15 20:29:25.873254] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.562 [2024-05-15 20:29:25.876875] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:33.562 [2024-05-15 20:29:25.885381] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.562 [2024-05-15 20:29:25.886119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-05-15 20:29:25.886592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-05-15 20:29:25.886610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.562 [2024-05-15 20:29:25.886628] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.562 [2024-05-15 20:29:25.886887] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.562 [2024-05-15 20:29:25.887114] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.562 [2024-05-15 20:29:25.887123] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.562 [2024-05-15 20:29:25.887131] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.562 [2024-05-15 20:29:25.890746] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:33.562 [2024-05-15 20:29:25.899237] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.562 [2024-05-15 20:29:25.900014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-05-15 20:29:25.900473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-05-15 20:29:25.900488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.562 [2024-05-15 20:29:25.900500] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.562 [2024-05-15 20:29:25.900757] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.562 [2024-05-15 20:29:25.900986] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.562 [2024-05-15 20:29:25.900994] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.562 [2024-05-15 20:29:25.901003] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.562 [2024-05-15 20:29:25.904626] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:33.562 [2024-05-15 20:29:25.913116] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.562 [2024-05-15 20:29:25.913864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-05-15 20:29:25.914246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-05-15 20:29:25.914260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.562 [2024-05-15 20:29:25.914272] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.562 [2024-05-15 20:29:25.914544] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.562 [2024-05-15 20:29:25.914773] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.562 [2024-05-15 20:29:25.914782] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.562 [2024-05-15 20:29:25.914790] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.562 [2024-05-15 20:29:25.918399] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:33.562 [2024-05-15 20:29:25.927102] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.562 [2024-05-15 20:29:25.927858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-05-15 20:29:25.928328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-05-15 20:29:25.928344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.562 [2024-05-15 20:29:25.928361] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.562 [2024-05-15 20:29:25.928619] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.562 [2024-05-15 20:29:25.928847] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.562 [2024-05-15 20:29:25.928856] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.562 [2024-05-15 20:29:25.928864] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.562 [2024-05-15 20:29:25.932486] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:33.562 [2024-05-15 20:29:25.940995] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.562 [2024-05-15 20:29:25.941721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-05-15 20:29:25.942182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-05-15 20:29:25.942196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.562 [2024-05-15 20:29:25.942207] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.562 [2024-05-15 20:29:25.942478] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.562 [2024-05-15 20:29:25.942708] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.562 [2024-05-15 20:29:25.942716] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.562 [2024-05-15 20:29:25.942724] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.562 [2024-05-15 20:29:25.946343] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:33.562 [2024-05-15 20:29:25.955039] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.562 [2024-05-15 20:29:25.955795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-05-15 20:29:25.956297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-05-15 20:29:25.956312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.562 [2024-05-15 20:29:25.956339] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.562 [2024-05-15 20:29:25.956596] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.562 [2024-05-15 20:29:25.956824] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.562 [2024-05-15 20:29:25.956833] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.562 [2024-05-15 20:29:25.956841] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.562 [2024-05-15 20:29:25.960466] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:33.562 [2024-05-15 20:29:25.968992] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.562 [2024-05-15 20:29:25.969724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-05-15 20:29:25.970058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-05-15 20:29:25.970077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.562 [2024-05-15 20:29:25.970088] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.562 [2024-05-15 20:29:25.970364] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.562 [2024-05-15 20:29:25.970594] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.562 [2024-05-15 20:29:25.970602] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.562 [2024-05-15 20:29:25.970610] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.562 [2024-05-15 20:29:25.974220] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:33.562 [2024-05-15 20:29:25.982923] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.562 [2024-05-15 20:29:25.983715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-05-15 20:29:25.984164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.562 [2024-05-15 20:29:25.984178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.562 [2024-05-15 20:29:25.984189] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.562 [2024-05-15 20:29:25.984459] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.562 [2024-05-15 20:29:25.984690] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.562 [2024-05-15 20:29:25.984698] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.562 [2024-05-15 20:29:25.984706] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.563 [2024-05-15 20:29:25.988315] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:33.563 [2024-05-15 20:29:25.996806] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.563 [2024-05-15 20:29:25.997552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-05-15 20:29:25.998007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-05-15 20:29:25.998022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.563 [2024-05-15 20:29:25.998033] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.563 [2024-05-15 20:29:25.998289] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.563 [2024-05-15 20:29:25.998534] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.563 [2024-05-15 20:29:25.998543] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.563 [2024-05-15 20:29:25.998551] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.563 [2024-05-15 20:29:26.002170] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:33.563 [2024-05-15 20:29:26.010674] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.563 [2024-05-15 20:29:26.011458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-05-15 20:29:26.011923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-05-15 20:29:26.011937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.563 [2024-05-15 20:29:26.011948] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.563 [2024-05-15 20:29:26.012205] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.563 [2024-05-15 20:29:26.012455] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.563 [2024-05-15 20:29:26.012465] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.563 [2024-05-15 20:29:26.012473] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.563 [2024-05-15 20:29:26.016088] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:33.563 [2024-05-15 20:29:26.024581] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.563 [2024-05-15 20:29:26.025325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-05-15 20:29:26.025824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-05-15 20:29:26.025838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.563 [2024-05-15 20:29:26.025849] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.563 [2024-05-15 20:29:26.026106] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.563 [2024-05-15 20:29:26.026344] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.563 [2024-05-15 20:29:26.026353] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.563 [2024-05-15 20:29:26.026361] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.563 [2024-05-15 20:29:26.029973] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:33.563 [2024-05-15 20:29:26.038478] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.563 [2024-05-15 20:29:26.038963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-05-15 20:29:26.039237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-05-15 20:29:26.039254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.563 [2024-05-15 20:29:26.039262] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.563 [2024-05-15 20:29:26.039506] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.563 [2024-05-15 20:29:26.039734] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.563 [2024-05-15 20:29:26.039742] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.563 [2024-05-15 20:29:26.039750] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.563 [2024-05-15 20:29:26.043359] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:33.563 [2024-05-15 20:29:26.052469] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.563 [2024-05-15 20:29:26.053188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-05-15 20:29:26.053638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.563 [2024-05-15 20:29:26.053654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.563 [2024-05-15 20:29:26.053665] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.563 [2024-05-15 20:29:26.053922] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.563 [2024-05-15 20:29:26.054151] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.563 [2024-05-15 20:29:26.054167] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.563 [2024-05-15 20:29:26.054175] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.563 [2024-05-15 20:29:26.057804] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:33.825 [2024-05-15 20:29:26.066338] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.825 [2024-05-15 20:29:26.067084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.825 [2024-05-15 20:29:26.067549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.825 [2024-05-15 20:29:26.067565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.825 [2024-05-15 20:29:26.067577] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.825 [2024-05-15 20:29:26.067833] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.825 [2024-05-15 20:29:26.068062] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.825 [2024-05-15 20:29:26.068070] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.825 [2024-05-15 20:29:26.068078] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.825 [2024-05-15 20:29:26.071708] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:33.825 [2024-05-15 20:29:26.080213] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.825 [2024-05-15 20:29:26.080953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.825 [2024-05-15 20:29:26.081408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.825 [2024-05-15 20:29:26.081423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.825 [2024-05-15 20:29:26.081434] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.825 [2024-05-15 20:29:26.081690] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.825 [2024-05-15 20:29:26.081919] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.825 [2024-05-15 20:29:26.081927] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.825 [2024-05-15 20:29:26.081935] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.826 [2024-05-15 20:29:26.085560] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:33.826 [2024-05-15 20:29:26.094270] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.826 [2024-05-15 20:29:26.095010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.826 [2024-05-15 20:29:26.095476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.826 [2024-05-15 20:29:26.095492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.826 [2024-05-15 20:29:26.095503] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.826 [2024-05-15 20:29:26.095760] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.826 [2024-05-15 20:29:26.095988] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.826 [2024-05-15 20:29:26.095996] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.826 [2024-05-15 20:29:26.096011] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.826 [2024-05-15 20:29:26.099634] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:33.826 [2024-05-15 20:29:26.108125] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.826 [2024-05-15 20:29:26.108899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.826 [2024-05-15 20:29:26.109357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.826 [2024-05-15 20:29:26.109373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.826 [2024-05-15 20:29:26.109384] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.826 [2024-05-15 20:29:26.109641] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.826 [2024-05-15 20:29:26.109870] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.826 [2024-05-15 20:29:26.109879] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.826 [2024-05-15 20:29:26.109887] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.826 [2024-05-15 20:29:26.113524] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:33.826 [2024-05-15 20:29:26.122026] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.826 [2024-05-15 20:29:26.122669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.826 [2024-05-15 20:29:26.122974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.826 [2024-05-15 20:29:26.122984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.826 [2024-05-15 20:29:26.122993] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.826 [2024-05-15 20:29:26.123219] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.826 [2024-05-15 20:29:26.123453] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.826 [2024-05-15 20:29:26.123462] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.826 [2024-05-15 20:29:26.123469] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.826 [2024-05-15 20:29:26.127077] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:33.826 [2024-05-15 20:29:26.136004] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.826 [2024-05-15 20:29:26.136754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.826 [2024-05-15 20:29:26.137215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.826 [2024-05-15 20:29:26.137230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.826 [2024-05-15 20:29:26.137241] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.826 [2024-05-15 20:29:26.137512] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.826 [2024-05-15 20:29:26.137742] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.826 [2024-05-15 20:29:26.137750] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.826 [2024-05-15 20:29:26.137758] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.826 [2024-05-15 20:29:26.141376] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:33.826 [2024-05-15 20:29:26.149877] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.826 [2024-05-15 20:29:26.150661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.826 [2024-05-15 20:29:26.151131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.826 [2024-05-15 20:29:26.151146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.826 [2024-05-15 20:29:26.151157] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.826 [2024-05-15 20:29:26.151428] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.826 [2024-05-15 20:29:26.151656] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.826 [2024-05-15 20:29:26.151665] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.826 [2024-05-15 20:29:26.151673] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.826 [2024-05-15 20:29:26.155282] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:33.826 [2024-05-15 20:29:26.163768] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.826 [2024-05-15 20:29:26.164511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.826 [2024-05-15 20:29:26.164973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.826 [2024-05-15 20:29:26.164988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.826 [2024-05-15 20:29:26.164998] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.826 [2024-05-15 20:29:26.165255] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.826 [2024-05-15 20:29:26.165496] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.826 [2024-05-15 20:29:26.165506] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.826 [2024-05-15 20:29:26.165514] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.826 [2024-05-15 20:29:26.169133] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:33.826 [2024-05-15 20:29:26.177633] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.826 [2024-05-15 20:29:26.178417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.826 [2024-05-15 20:29:26.178868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.826 [2024-05-15 20:29:26.178882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.826 [2024-05-15 20:29:26.178893] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.826 [2024-05-15 20:29:26.179150] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.826 [2024-05-15 20:29:26.179392] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.826 [2024-05-15 20:29:26.179402] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.826 [2024-05-15 20:29:26.179410] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.826 [2024-05-15 20:29:26.183026] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:33.826 [2024-05-15 20:29:26.191513] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.826 [2024-05-15 20:29:26.192173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.826 [2024-05-15 20:29:26.192467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.826 [2024-05-15 20:29:26.192480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.826 [2024-05-15 20:29:26.192488] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.826 [2024-05-15 20:29:26.192714] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.826 [2024-05-15 20:29:26.192937] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.826 [2024-05-15 20:29:26.192945] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.826 [2024-05-15 20:29:26.192953] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.826 [2024-05-15 20:29:26.196557] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:33.826 [2024-05-15 20:29:26.205462] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.826 [2024-05-15 20:29:26.206063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.826 [2024-05-15 20:29:26.206525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.826 [2024-05-15 20:29:26.206543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.826 [2024-05-15 20:29:26.206554] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.826 [2024-05-15 20:29:26.206812] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.826 [2024-05-15 20:29:26.207040] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.826 [2024-05-15 20:29:26.207049] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.826 [2024-05-15 20:29:26.207057] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.826 [2024-05-15 20:29:26.210690] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:33.826 [2024-05-15 20:29:26.219441] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.826 [2024-05-15 20:29:26.220087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.826 [2024-05-15 20:29:26.220554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.826 [2024-05-15 20:29:26.220571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.826 [2024-05-15 20:29:26.220582] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.826 [2024-05-15 20:29:26.220840] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.826 [2024-05-15 20:29:26.221068] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.826 [2024-05-15 20:29:26.221077] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.826 [2024-05-15 20:29:26.221085] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.826 [2024-05-15 20:29:26.224707] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:33.826 [2024-05-15 20:29:26.233429] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.826 [2024-05-15 20:29:26.234070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.826 [2024-05-15 20:29:26.234466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.826 [2024-05-15 20:29:26.234478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.826 [2024-05-15 20:29:26.234486] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.826 [2024-05-15 20:29:26.234712] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.826 [2024-05-15 20:29:26.234935] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.826 [2024-05-15 20:29:26.234944] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.826 [2024-05-15 20:29:26.234951] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.826 [2024-05-15 20:29:26.238555] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:33.826 [2024-05-15 20:29:26.247460] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.826 [2024-05-15 20:29:26.248082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.826 [2024-05-15 20:29:26.248500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.826 [2024-05-15 20:29:26.248511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.826 [2024-05-15 20:29:26.248519] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.826 [2024-05-15 20:29:26.248744] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.826 [2024-05-15 20:29:26.248968] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.826 [2024-05-15 20:29:26.248976] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.826 [2024-05-15 20:29:26.248983] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.826 [2024-05-15 20:29:26.252586] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:33.826 [2024-05-15 20:29:26.261511] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.826 [2024-05-15 20:29:26.262138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.826 [2024-05-15 20:29:26.262526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.826 [2024-05-15 20:29:26.262537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.826 [2024-05-15 20:29:26.262545] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.826 [2024-05-15 20:29:26.262770] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.826 [2024-05-15 20:29:26.262993] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.826 [2024-05-15 20:29:26.263001] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.826 [2024-05-15 20:29:26.263008] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.826 [2024-05-15 20:29:26.266633] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:33.826 [2024-05-15 20:29:26.275574] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.826 [2024-05-15 20:29:26.276199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.826 [2024-05-15 20:29:26.276602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.826 [2024-05-15 20:29:26.276619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.826 [2024-05-15 20:29:26.276627] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.826 [2024-05-15 20:29:26.276852] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.826 [2024-05-15 20:29:26.277076] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.826 [2024-05-15 20:29:26.277083] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.826 [2024-05-15 20:29:26.277091] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.826 [2024-05-15 20:29:26.280713] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:33.826 [2024-05-15 20:29:26.289451] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.826 [2024-05-15 20:29:26.290086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.826 [2024-05-15 20:29:26.290499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.826 [2024-05-15 20:29:26.290510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.826 [2024-05-15 20:29:26.290518] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.826 [2024-05-15 20:29:26.290741] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.826 [2024-05-15 20:29:26.290964] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.826 [2024-05-15 20:29:26.290972] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.826 [2024-05-15 20:29:26.290979] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.826 [2024-05-15 20:29:26.294598] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:33.826 [2024-05-15 20:29:26.303324] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.826 [2024-05-15 20:29:26.303900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.826 [2024-05-15 20:29:26.304194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.826 [2024-05-15 20:29:26.304206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.826 [2024-05-15 20:29:26.304213] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.826 [2024-05-15 20:29:26.304445] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.826 [2024-05-15 20:29:26.304671] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.826 [2024-05-15 20:29:26.304679] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.827 [2024-05-15 20:29:26.304686] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.827 [2024-05-15 20:29:26.308296] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:33.827 [2024-05-15 20:29:26.317235] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:33.827 [2024-05-15 20:29:26.317811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.827 [2024-05-15 20:29:26.318181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:33.827 [2024-05-15 20:29:26.318191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:33.827 [2024-05-15 20:29:26.318205] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:33.827 [2024-05-15 20:29:26.318438] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:33.827 [2024-05-15 20:29:26.318661] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:33.827 [2024-05-15 20:29:26.318670] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:33.827 [2024-05-15 20:29:26.318681] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:33.827 [2024-05-15 20:29:26.322307] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.089 [2024-05-15 20:29:26.331251] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.089 [2024-05-15 20:29:26.331987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.089 [2024-05-15 20:29:26.332446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.089 [2024-05-15 20:29:26.332462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.089 [2024-05-15 20:29:26.332473] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.089 [2024-05-15 20:29:26.332730] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.089 [2024-05-15 20:29:26.332958] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.089 [2024-05-15 20:29:26.332966] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.089 [2024-05-15 20:29:26.332975] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.089 [2024-05-15 20:29:26.336615] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.089 [2024-05-15 20:29:26.345110] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.089 [2024-05-15 20:29:26.345872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.089 [2024-05-15 20:29:26.346388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.089 [2024-05-15 20:29:26.346405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.089 [2024-05-15 20:29:26.346416] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.089 [2024-05-15 20:29:26.346674] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.089 [2024-05-15 20:29:26.346902] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.089 [2024-05-15 20:29:26.346911] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.089 [2024-05-15 20:29:26.346919] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.089 [2024-05-15 20:29:26.350533] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.089 [2024-05-15 20:29:26.359030] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.089 [2024-05-15 20:29:26.359733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.089 [2024-05-15 20:29:26.360226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.089 [2024-05-15 20:29:26.360241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.089 [2024-05-15 20:29:26.360252] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.089 [2024-05-15 20:29:26.360530] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.089 [2024-05-15 20:29:26.360759] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.089 [2024-05-15 20:29:26.360768] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.089 [2024-05-15 20:29:26.360776] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.089 [2024-05-15 20:29:26.364393] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.089 [2024-05-15 20:29:26.372884] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.089 [2024-05-15 20:29:26.373608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.089 [2024-05-15 20:29:26.374065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.089 [2024-05-15 20:29:26.374079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.089 [2024-05-15 20:29:26.374090] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.089 [2024-05-15 20:29:26.374360] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.089 [2024-05-15 20:29:26.374589] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.089 [2024-05-15 20:29:26.374598] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.089 [2024-05-15 20:29:26.374606] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.089 [2024-05-15 20:29:26.378218] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.089 [2024-05-15 20:29:26.386918] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.089 [2024-05-15 20:29:26.387612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.089 [2024-05-15 20:29:26.388072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.089 [2024-05-15 20:29:26.388087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.089 [2024-05-15 20:29:26.388098] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.090 [2024-05-15 20:29:26.388366] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.090 [2024-05-15 20:29:26.388595] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.090 [2024-05-15 20:29:26.388603] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.090 [2024-05-15 20:29:26.388612] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.090 [2024-05-15 20:29:26.392223] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.090 [2024-05-15 20:29:26.400925] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.090 [2024-05-15 20:29:26.401699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.090 [2024-05-15 20:29:26.401996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.090 [2024-05-15 20:29:26.402010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.090 [2024-05-15 20:29:26.402022] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.090 [2024-05-15 20:29:26.402280] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.090 [2024-05-15 20:29:26.402531] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.090 [2024-05-15 20:29:26.402540] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.090 [2024-05-15 20:29:26.402548] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.090 [2024-05-15 20:29:26.406162] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.090 [2024-05-15 20:29:26.414866] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.090 [2024-05-15 20:29:26.415648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.090 [2024-05-15 20:29:26.416033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.090 [2024-05-15 20:29:26.416047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.090 [2024-05-15 20:29:26.416058] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.090 [2024-05-15 20:29:26.416328] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.090 [2024-05-15 20:29:26.416558] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.090 [2024-05-15 20:29:26.416567] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.090 [2024-05-15 20:29:26.416576] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.090 [2024-05-15 20:29:26.420187] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.090 [2024-05-15 20:29:26.428901] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.090 [2024-05-15 20:29:26.429676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.090 [2024-05-15 20:29:26.430136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.090 [2024-05-15 20:29:26.430150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.090 [2024-05-15 20:29:26.430161] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.090 [2024-05-15 20:29:26.430431] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.090 [2024-05-15 20:29:26.430661] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.090 [2024-05-15 20:29:26.430669] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.090 [2024-05-15 20:29:26.430677] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.090 [2024-05-15 20:29:26.434293] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.090 [2024-05-15 20:29:26.442804] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.090 [2024-05-15 20:29:26.443584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.090 [2024-05-15 20:29:26.444044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.090 [2024-05-15 20:29:26.444058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.090 [2024-05-15 20:29:26.444069] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.090 [2024-05-15 20:29:26.444340] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.090 [2024-05-15 20:29:26.444569] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.090 [2024-05-15 20:29:26.444585] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.090 [2024-05-15 20:29:26.444593] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.090 [2024-05-15 20:29:26.448210] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.090 [2024-05-15 20:29:26.456795] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.090 [2024-05-15 20:29:26.457592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.090 [2024-05-15 20:29:26.458051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.090 [2024-05-15 20:29:26.458065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.090 [2024-05-15 20:29:26.458076] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.090 [2024-05-15 20:29:26.458345] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.090 [2024-05-15 20:29:26.458574] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.090 [2024-05-15 20:29:26.458584] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.090 [2024-05-15 20:29:26.458592] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.090 [2024-05-15 20:29:26.462207] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.090 [2024-05-15 20:29:26.470705] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.090 [2024-05-15 20:29:26.471356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.090 [2024-05-15 20:29:26.471722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.090 [2024-05-15 20:29:26.471732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.090 [2024-05-15 20:29:26.471740] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.090 [2024-05-15 20:29:26.471968] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.090 [2024-05-15 20:29:26.472191] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.090 [2024-05-15 20:29:26.472201] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.090 [2024-05-15 20:29:26.472208] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.090 [2024-05-15 20:29:26.475824] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.090 [2024-05-15 20:29:26.484732] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.090 [2024-05-15 20:29:26.485448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.090 [2024-05-15 20:29:26.485889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.090 [2024-05-15 20:29:26.485904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.090 [2024-05-15 20:29:26.485915] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.090 [2024-05-15 20:29:26.486172] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.090 [2024-05-15 20:29:26.486414] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.090 [2024-05-15 20:29:26.486424] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.090 [2024-05-15 20:29:26.486439] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.090 [2024-05-15 20:29:26.490056] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.090 [2024-05-15 20:29:26.498781] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.090 [2024-05-15 20:29:26.499443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.090 [2024-05-15 20:29:26.499888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.090 [2024-05-15 20:29:26.499902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.090 [2024-05-15 20:29:26.499913] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.090 [2024-05-15 20:29:26.500169] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.090 [2024-05-15 20:29:26.500412] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.090 [2024-05-15 20:29:26.500422] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.090 [2024-05-15 20:29:26.500430] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.090 [2024-05-15 20:29:26.504041] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.090 [2024-05-15 20:29:26.512741] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.090 [2024-05-15 20:29:26.513433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.090 [2024-05-15 20:29:26.513898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.090 [2024-05-15 20:29:26.513914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.090 [2024-05-15 20:29:26.513925] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.090 [2024-05-15 20:29:26.514181] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.090 [2024-05-15 20:29:26.514423] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.090 [2024-05-15 20:29:26.514433] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.090 [2024-05-15 20:29:26.514441] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.090 [2024-05-15 20:29:26.518057] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.090 [2024-05-15 20:29:26.526763] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.090 [2024-05-15 20:29:26.527428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.091 [2024-05-15 20:29:26.527874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.091 [2024-05-15 20:29:26.527890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.091 [2024-05-15 20:29:26.527901] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.091 [2024-05-15 20:29:26.528158] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.091 [2024-05-15 20:29:26.528405] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.091 [2024-05-15 20:29:26.528415] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.091 [2024-05-15 20:29:26.528423] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.091 [2024-05-15 20:29:26.532038] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.091 [2024-05-15 20:29:26.540768] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.091 [2024-05-15 20:29:26.541574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.091 [2024-05-15 20:29:26.542029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.091 [2024-05-15 20:29:26.542043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.091 [2024-05-15 20:29:26.542054] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.091 [2024-05-15 20:29:26.542311] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.091 [2024-05-15 20:29:26.542555] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.091 [2024-05-15 20:29:26.542563] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.091 [2024-05-15 20:29:26.542571] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.091 [2024-05-15 20:29:26.546183] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.091 [2024-05-15 20:29:26.554676] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.091 [2024-05-15 20:29:26.555428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.091 [2024-05-15 20:29:26.555898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.091 [2024-05-15 20:29:26.555913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.091 [2024-05-15 20:29:26.555924] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.091 [2024-05-15 20:29:26.556181] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.091 [2024-05-15 20:29:26.556424] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.091 [2024-05-15 20:29:26.556433] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.091 [2024-05-15 20:29:26.556441] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.091 [2024-05-15 20:29:26.560053] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.091 [2024-05-15 20:29:26.568544] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.091 [2024-05-15 20:29:26.569361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.091 [2024-05-15 20:29:26.569822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.091 [2024-05-15 20:29:26.569836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.091 [2024-05-15 20:29:26.569848] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.091 [2024-05-15 20:29:26.570104] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.091 [2024-05-15 20:29:26.570348] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.091 [2024-05-15 20:29:26.570357] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.091 [2024-05-15 20:29:26.570365] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.091 [2024-05-15 20:29:26.573978] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.091 [2024-05-15 20:29:26.582473] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.091 [2024-05-15 20:29:26.583167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.091 [2024-05-15 20:29:26.583569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.091 [2024-05-15 20:29:26.583581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.091 [2024-05-15 20:29:26.583589] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.091 [2024-05-15 20:29:26.583815] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.091 [2024-05-15 20:29:26.584038] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.091 [2024-05-15 20:29:26.584046] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.091 [2024-05-15 20:29:26.584054] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.091 [2024-05-15 20:29:26.587662] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.355 [2024-05-15 20:29:26.596365] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.355 [2024-05-15 20:29:26.597092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.355 [2024-05-15 20:29:26.597538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.355 [2024-05-15 20:29:26.597554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.355 [2024-05-15 20:29:26.597565] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.355 [2024-05-15 20:29:26.597822] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.355 [2024-05-15 20:29:26.598051] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.355 [2024-05-15 20:29:26.598060] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.355 [2024-05-15 20:29:26.598068] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.355 [2024-05-15 20:29:26.601691] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.355 [2024-05-15 20:29:26.610243] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.355 [2024-05-15 20:29:26.611028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.355 [2024-05-15 20:29:26.611483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.355 [2024-05-15 20:29:26.611500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.355 [2024-05-15 20:29:26.611511] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.355 [2024-05-15 20:29:26.611768] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.355 [2024-05-15 20:29:26.611997] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.355 [2024-05-15 20:29:26.612006] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.355 [2024-05-15 20:29:26.612014] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.355 [2024-05-15 20:29:26.615633] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.355 [2024-05-15 20:29:26.624123] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.355 [2024-05-15 20:29:26.624699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.355 [2024-05-15 20:29:26.625170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.355 [2024-05-15 20:29:26.625184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.355 [2024-05-15 20:29:26.625195] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.355 [2024-05-15 20:29:26.625465] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.355 [2024-05-15 20:29:26.625694] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.355 [2024-05-15 20:29:26.625702] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.355 [2024-05-15 20:29:26.625710] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.355 [2024-05-15 20:29:26.629325] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.355 [2024-05-15 20:29:26.638048] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.355 [2024-05-15 20:29:26.638839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.355 [2024-05-15 20:29:26.639255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.355 [2024-05-15 20:29:26.639269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.355 [2024-05-15 20:29:26.639280] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.355 [2024-05-15 20:29:26.639550] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.355 [2024-05-15 20:29:26.639779] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.355 [2024-05-15 20:29:26.639788] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.355 [2024-05-15 20:29:26.639796] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.355 [2024-05-15 20:29:26.643409] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.355 [2024-05-15 20:29:26.651901] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.355 [2024-05-15 20:29:26.652644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.355 [2024-05-15 20:29:26.653101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.355 [2024-05-15 20:29:26.653116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.355 [2024-05-15 20:29:26.653126] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.355 [2024-05-15 20:29:26.653397] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.355 [2024-05-15 20:29:26.653626] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.355 [2024-05-15 20:29:26.653635] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.355 [2024-05-15 20:29:26.653644] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.355 [2024-05-15 20:29:26.657258] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.355 [2024-05-15 20:29:26.665960] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.355 [2024-05-15 20:29:26.666734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.355 [2024-05-15 20:29:26.667160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.355 [2024-05-15 20:29:26.667181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.355 [2024-05-15 20:29:26.667192] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.355 [2024-05-15 20:29:26.667463] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.355 [2024-05-15 20:29:26.667693] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.355 [2024-05-15 20:29:26.667701] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.355 [2024-05-15 20:29:26.667709] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.355 [2024-05-15 20:29:26.671326] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.355 [2024-05-15 20:29:26.679812] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.355 [2024-05-15 20:29:26.680584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.355 [2024-05-15 20:29:26.681045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.355 [2024-05-15 20:29:26.681060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.355 [2024-05-15 20:29:26.681071] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.355 [2024-05-15 20:29:26.681341] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.355 [2024-05-15 20:29:26.681571] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.355 [2024-05-15 20:29:26.681579] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.355 [2024-05-15 20:29:26.681587] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.355 [2024-05-15 20:29:26.685202] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.355 [2024-05-15 20:29:26.693692] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.355 [2024-05-15 20:29:26.694411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.355 [2024-05-15 20:29:26.694857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.355 [2024-05-15 20:29:26.694871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.355 [2024-05-15 20:29:26.694882] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.355 [2024-05-15 20:29:26.695139] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.355 [2024-05-15 20:29:26.695382] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.355 [2024-05-15 20:29:26.695392] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.355 [2024-05-15 20:29:26.695400] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.355 [2024-05-15 20:29:26.699012] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.355 [2024-05-15 20:29:26.707791] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.355 [2024-05-15 20:29:26.708555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.355 [2024-05-15 20:29:26.709012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.355 [2024-05-15 20:29:26.709028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.355 [2024-05-15 20:29:26.709047] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.355 [2024-05-15 20:29:26.709304] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.355 [2024-05-15 20:29:26.709552] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.355 [2024-05-15 20:29:26.709562] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.355 [2024-05-15 20:29:26.709570] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.355 [2024-05-15 20:29:26.713186] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.355 [2024-05-15 20:29:26.721687] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.355 [2024-05-15 20:29:26.722428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.355 [2024-05-15 20:29:26.722867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.355 [2024-05-15 20:29:26.722881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.355 [2024-05-15 20:29:26.722892] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.356 [2024-05-15 20:29:26.723149] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.356 [2024-05-15 20:29:26.723390] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.356 [2024-05-15 20:29:26.723399] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.356 [2024-05-15 20:29:26.723408] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.356 [2024-05-15 20:29:26.727020] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.356 [2024-05-15 20:29:26.735733] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.356 [2024-05-15 20:29:26.736469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-05-15 20:29:26.736925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-05-15 20:29:26.736940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.356 [2024-05-15 20:29:26.736951] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.356 [2024-05-15 20:29:26.737208] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.356 [2024-05-15 20:29:26.737464] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.356 [2024-05-15 20:29:26.737475] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.356 [2024-05-15 20:29:26.737483] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.356 [2024-05-15 20:29:26.741100] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.356 [2024-05-15 20:29:26.749613] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.356 [2024-05-15 20:29:26.750295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-05-15 20:29:26.750685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-05-15 20:29:26.750697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.356 [2024-05-15 20:29:26.750706] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.356 [2024-05-15 20:29:26.750939] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.356 [2024-05-15 20:29:26.751163] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.356 [2024-05-15 20:29:26.751171] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.356 [2024-05-15 20:29:26.751178] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.356 [2024-05-15 20:29:26.754785] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.356 [2024-05-15 20:29:26.763625] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.356 [2024-05-15 20:29:26.764273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-05-15 20:29:26.764668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-05-15 20:29:26.764680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.356 [2024-05-15 20:29:26.764689] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.356 [2024-05-15 20:29:26.764916] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.356 [2024-05-15 20:29:26.765140] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.356 [2024-05-15 20:29:26.765149] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.356 [2024-05-15 20:29:26.765156] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.356 [2024-05-15 20:29:26.768773] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.356 [2024-05-15 20:29:26.777692] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.356 [2024-05-15 20:29:26.778328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-05-15 20:29:26.778717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-05-15 20:29:26.778729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.356 [2024-05-15 20:29:26.778738] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.356 [2024-05-15 20:29:26.778962] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.356 [2024-05-15 20:29:26.779185] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.356 [2024-05-15 20:29:26.779193] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.356 [2024-05-15 20:29:26.779201] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.356 [2024-05-15 20:29:26.782812] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.356 [2024-05-15 20:29:26.791738] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.356 [2024-05-15 20:29:26.792380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-05-15 20:29:26.792775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-05-15 20:29:26.792785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.356 [2024-05-15 20:29:26.792794] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.356 [2024-05-15 20:29:26.793018] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.356 [2024-05-15 20:29:26.793249] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.356 [2024-05-15 20:29:26.793259] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.356 [2024-05-15 20:29:26.793266] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.356 [2024-05-15 20:29:26.796881] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.356 [2024-05-15 20:29:26.805592] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.356 [2024-05-15 20:29:26.806329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-05-15 20:29:26.806801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-05-15 20:29:26.806816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.356 [2024-05-15 20:29:26.806827] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.356 [2024-05-15 20:29:26.807084] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.356 [2024-05-15 20:29:26.807312] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.356 [2024-05-15 20:29:26.807329] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.356 [2024-05-15 20:29:26.807340] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.356 [2024-05-15 20:29:26.810964] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.356 [2024-05-15 20:29:26.819473] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.356 [2024-05-15 20:29:26.820209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-05-15 20:29:26.821108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-05-15 20:29:26.821133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.356 [2024-05-15 20:29:26.821146] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.356 [2024-05-15 20:29:26.821418] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.356 [2024-05-15 20:29:26.821649] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.356 [2024-05-15 20:29:26.821659] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.356 [2024-05-15 20:29:26.821667] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.356 [2024-05-15 20:29:26.825283] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.356 [2024-05-15 20:29:26.833363] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.356 [2024-05-15 20:29:26.834014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-05-15 20:29:26.834404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-05-15 20:29:26.834416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.356 [2024-05-15 20:29:26.834424] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.356 [2024-05-15 20:29:26.834649] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.356 [2024-05-15 20:29:26.834872] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.356 [2024-05-15 20:29:26.834888] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.356 [2024-05-15 20:29:26.834895] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.356 [2024-05-15 20:29:26.838532] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.356 [2024-05-15 20:29:26.847306] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.356 [2024-05-15 20:29:26.848053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-05-15 20:29:26.848517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.356 [2024-05-15 20:29:26.848533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.356 [2024-05-15 20:29:26.848546] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.356 [2024-05-15 20:29:26.848803] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.356 [2024-05-15 20:29:26.849031] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.356 [2024-05-15 20:29:26.849040] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.356 [2024-05-15 20:29:26.849048] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.356 [2024-05-15 20:29:26.852666] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.619 [2024-05-15 20:29:26.861167] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.619 [2024-05-15 20:29:26.861852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.619 [2024-05-15 20:29:26.862244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.619 [2024-05-15 20:29:26.862257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.619 [2024-05-15 20:29:26.862265] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.619 [2024-05-15 20:29:26.862496] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.619 [2024-05-15 20:29:26.862719] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.619 [2024-05-15 20:29:26.862729] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.619 [2024-05-15 20:29:26.862737] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.619 [2024-05-15 20:29:26.866358] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.619 [2024-05-15 20:29:26.875065] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.619 [2024-05-15 20:29:26.875679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.619 [2024-05-15 20:29:26.876085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.619 [2024-05-15 20:29:26.876095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.620 [2024-05-15 20:29:26.876105] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.620 [2024-05-15 20:29:26.876338] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.620 [2024-05-15 20:29:26.876564] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.620 [2024-05-15 20:29:26.876572] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.620 [2024-05-15 20:29:26.876595] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.620 [2024-05-15 20:29:26.880199] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.620 [2024-05-15 20:29:26.889122] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.620 [2024-05-15 20:29:26.889803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.620 [2024-05-15 20:29:26.890092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.620 [2024-05-15 20:29:26.890107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.620 [2024-05-15 20:29:26.890115] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.620 [2024-05-15 20:29:26.890354] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.620 [2024-05-15 20:29:26.890581] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.620 [2024-05-15 20:29:26.890589] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.620 [2024-05-15 20:29:26.890596] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.620 [2024-05-15 20:29:26.894206] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.620 [2024-05-15 20:29:26.903125] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.620 [2024-05-15 20:29:26.903808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.620 [2024-05-15 20:29:26.904200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.620 [2024-05-15 20:29:26.904211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.620 [2024-05-15 20:29:26.904219] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.620 [2024-05-15 20:29:26.904451] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.620 [2024-05-15 20:29:26.904687] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.620 [2024-05-15 20:29:26.904696] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.620 [2024-05-15 20:29:26.904703] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.620 [2024-05-15 20:29:26.908311] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.620 [2024-05-15 20:29:26.917029] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.620 [2024-05-15 20:29:26.917546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.620 [2024-05-15 20:29:26.917940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.620 [2024-05-15 20:29:26.917950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.620 [2024-05-15 20:29:26.917959] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.620 [2024-05-15 20:29:26.918184] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.620 [2024-05-15 20:29:26.918411] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.620 [2024-05-15 20:29:26.918420] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.620 [2024-05-15 20:29:26.918427] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.620 [2024-05-15 20:29:26.922050] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.620 [2024-05-15 20:29:26.930965] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.620 [2024-05-15 20:29:26.931709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.620 [2024-05-15 20:29:26.932017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.620 [2024-05-15 20:29:26.932032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.620 [2024-05-15 20:29:26.932042] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.620 [2024-05-15 20:29:26.932296] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.620 [2024-05-15 20:29:26.932533] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.620 [2024-05-15 20:29:26.932543] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.620 [2024-05-15 20:29:26.932551] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.620 [2024-05-15 20:29:26.936162] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.620 [2024-05-15 20:29:26.944902] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.620 [2024-05-15 20:29:26.946856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.620 [2024-05-15 20:29:26.947308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.620 [2024-05-15 20:29:26.947335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.620 [2024-05-15 20:29:26.947346] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.620 [2024-05-15 20:29:26.947600] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.620 [2024-05-15 20:29:26.947828] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.620 [2024-05-15 20:29:26.947837] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.620 [2024-05-15 20:29:26.947844] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.620 [2024-05-15 20:29:26.951467] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.620 [2024-05-15 20:29:26.958903] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.620 [2024-05-15 20:29:26.959589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.620 [2024-05-15 20:29:26.960361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.620 [2024-05-15 20:29:26.960382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.620 [2024-05-15 20:29:26.960391] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.620 [2024-05-15 20:29:26.960633] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.620 [2024-05-15 20:29:26.960859] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.620 [2024-05-15 20:29:26.960869] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.620 [2024-05-15 20:29:26.960877] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.620 [2024-05-15 20:29:26.964500] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.620 [2024-05-15 20:29:26.972791] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.620 [2024-05-15 20:29:26.973436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.620 [2024-05-15 20:29:26.973910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.620 [2024-05-15 20:29:26.973924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.620 [2024-05-15 20:29:26.973935] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.620 [2024-05-15 20:29:26.974191] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.620 [2024-05-15 20:29:26.974430] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.620 [2024-05-15 20:29:26.974439] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.620 [2024-05-15 20:29:26.974447] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.620 [2024-05-15 20:29:26.978062] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.620 [2024-05-15 20:29:26.986781] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.620 [2024-05-15 20:29:26.987452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.620 [2024-05-15 20:29:26.987846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.620 [2024-05-15 20:29:26.987857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.620 [2024-05-15 20:29:26.987865] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.620 [2024-05-15 20:29:26.988089] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.620 [2024-05-15 20:29:26.988319] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.620 [2024-05-15 20:29:26.988328] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.620 [2024-05-15 20:29:26.988336] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.620 [2024-05-15 20:29:26.991946] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.620 [2024-05-15 20:29:27.000650] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.620 [2024-05-15 20:29:27.001278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.620 [2024-05-15 20:29:27.001663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.620 [2024-05-15 20:29:27.001674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.620 [2024-05-15 20:29:27.001683] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.620 [2024-05-15 20:29:27.001907] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.620 [2024-05-15 20:29:27.002132] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.620 [2024-05-15 20:29:27.002140] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.620 [2024-05-15 20:29:27.002147] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.620 [2024-05-15 20:29:27.005760] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.621 [2024-05-15 20:29:27.014682] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.621 [2024-05-15 20:29:27.015409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.621 [2024-05-15 20:29:27.015866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.621 [2024-05-15 20:29:27.015880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.621 [2024-05-15 20:29:27.015891] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.621 [2024-05-15 20:29:27.016148] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.621 [2024-05-15 20:29:27.016389] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.621 [2024-05-15 20:29:27.016399] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.621 [2024-05-15 20:29:27.016407] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.621 [2024-05-15 20:29:27.020027] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.621 [2024-05-15 20:29:27.028739] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.621 [2024-05-15 20:29:27.029264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.621 [2024-05-15 20:29:27.029740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.621 [2024-05-15 20:29:27.029753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.621 [2024-05-15 20:29:27.029762] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.621 [2024-05-15 20:29:27.029989] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.621 [2024-05-15 20:29:27.030213] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.621 [2024-05-15 20:29:27.030221] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.621 [2024-05-15 20:29:27.030228] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.621 [2024-05-15 20:29:27.033840] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.621 [2024-05-15 20:29:27.042787] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.621 [2024-05-15 20:29:27.043417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.621 [2024-05-15 20:29:27.043810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.621 [2024-05-15 20:29:27.043820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.621 [2024-05-15 20:29:27.043828] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.621 [2024-05-15 20:29:27.044053] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.621 [2024-05-15 20:29:27.044276] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.621 [2024-05-15 20:29:27.044285] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.621 [2024-05-15 20:29:27.044292] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.621 [2024-05-15 20:29:27.047907] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.621 [2024-05-15 20:29:27.056832] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.621 [2024-05-15 20:29:27.057460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.621 [2024-05-15 20:29:27.057850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.621 [2024-05-15 20:29:27.057866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.621 [2024-05-15 20:29:27.057874] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.621 [2024-05-15 20:29:27.058099] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.621 [2024-05-15 20:29:27.058327] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.621 [2024-05-15 20:29:27.058337] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.621 [2024-05-15 20:29:27.058344] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.621 [2024-05-15 20:29:27.061948] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.621 [2024-05-15 20:29:27.070861] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.621 [2024-05-15 20:29:27.071487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.621 [2024-05-15 20:29:27.071774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.621 [2024-05-15 20:29:27.071784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.621 [2024-05-15 20:29:27.071792] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.621 [2024-05-15 20:29:27.072015] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.621 [2024-05-15 20:29:27.072238] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.621 [2024-05-15 20:29:27.072246] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.621 [2024-05-15 20:29:27.072253] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.621 [2024-05-15 20:29:27.075860] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.621 [2024-05-15 20:29:27.084768] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.621 [2024-05-15 20:29:27.085528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.621 [2024-05-15 20:29:27.085931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.621 [2024-05-15 20:29:27.085944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.621 [2024-05-15 20:29:27.085954] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.621 [2024-05-15 20:29:27.086204] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.621 [2024-05-15 20:29:27.086437] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.621 [2024-05-15 20:29:27.086447] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.621 [2024-05-15 20:29:27.086455] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.621 [2024-05-15 20:29:27.090061] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.621 [2024-05-15 20:29:27.098765] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.621 [2024-05-15 20:29:27.099543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.621 [2024-05-15 20:29:27.099948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.621 [2024-05-15 20:29:27.099961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.621 [2024-05-15 20:29:27.099976] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.621 [2024-05-15 20:29:27.100223] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.621 [2024-05-15 20:29:27.100456] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.621 [2024-05-15 20:29:27.100467] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.621 [2024-05-15 20:29:27.100474] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.621 [2024-05-15 20:29:27.104084] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.621 [2024-05-15 20:29:27.112790] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.621 [2024-05-15 20:29:27.113443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.621 [2024-05-15 20:29:27.113858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.621 [2024-05-15 20:29:27.113868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.621 [2024-05-15 20:29:27.113875] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.621 [2024-05-15 20:29:27.114097] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.621 [2024-05-15 20:29:27.114324] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.621 [2024-05-15 20:29:27.114332] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.621 [2024-05-15 20:29:27.114338] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.621 [2024-05-15 20:29:27.117937] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.883 [2024-05-15 20:29:27.126632] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.883 [2024-05-15 20:29:27.127159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.883 [2024-05-15 20:29:27.127532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.883 [2024-05-15 20:29:27.127542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.883 [2024-05-15 20:29:27.127550] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.883 [2024-05-15 20:29:27.127772] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.883 [2024-05-15 20:29:27.127994] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.884 [2024-05-15 20:29:27.128002] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.884 [2024-05-15 20:29:27.128009] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.884 [2024-05-15 20:29:27.131606] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.884 [2024-05-15 20:29:27.140513] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.884 [2024-05-15 20:29:27.140977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.884 [2024-05-15 20:29:27.141351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.884 [2024-05-15 20:29:27.141362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.884 [2024-05-15 20:29:27.141369] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.884 [2024-05-15 20:29:27.141598] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.884 [2024-05-15 20:29:27.141819] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.884 [2024-05-15 20:29:27.141826] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.884 [2024-05-15 20:29:27.141833] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.884 [2024-05-15 20:29:27.145430] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.884 [2024-05-15 20:29:27.154541] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.884 [2024-05-15 20:29:27.155268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.884 [2024-05-15 20:29:27.155737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.884 [2024-05-15 20:29:27.155750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.884 [2024-05-15 20:29:27.155760] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.884 [2024-05-15 20:29:27.156002] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.884 [2024-05-15 20:29:27.156227] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.884 [2024-05-15 20:29:27.156235] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.884 [2024-05-15 20:29:27.156242] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.884 [2024-05-15 20:29:27.159843] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.884 [2024-05-15 20:29:27.168534] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.884 [2024-05-15 20:29:27.169173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.884 [2024-05-15 20:29:27.169539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.884 [2024-05-15 20:29:27.169549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.884 [2024-05-15 20:29:27.169557] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.884 [2024-05-15 20:29:27.169778] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.884 [2024-05-15 20:29:27.169999] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.884 [2024-05-15 20:29:27.170006] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.884 [2024-05-15 20:29:27.170013] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.884 [2024-05-15 20:29:27.173607] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.884 [2024-05-15 20:29:27.182499] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.884 [2024-05-15 20:29:27.183224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.884 [2024-05-15 20:29:27.183680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.884 [2024-05-15 20:29:27.183695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.884 [2024-05-15 20:29:27.183705] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.884 [2024-05-15 20:29:27.183947] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.884 [2024-05-15 20:29:27.184176] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.884 [2024-05-15 20:29:27.184185] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.884 [2024-05-15 20:29:27.184192] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.884 [2024-05-15 20:29:27.187795] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.884 [2024-05-15 20:29:27.196482] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.884 [2024-05-15 20:29:27.197124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.884 [2024-05-15 20:29:27.197395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.884 [2024-05-15 20:29:27.197406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.884 [2024-05-15 20:29:27.197414] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.884 [2024-05-15 20:29:27.197636] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.884 [2024-05-15 20:29:27.197857] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.884 [2024-05-15 20:29:27.197865] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.884 [2024-05-15 20:29:27.197872] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.884 [2024-05-15 20:29:27.201463] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.884 [2024-05-15 20:29:27.210355] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.884 [2024-05-15 20:29:27.210885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.884 [2024-05-15 20:29:27.211286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.884 [2024-05-15 20:29:27.211295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.884 [2024-05-15 20:29:27.211302] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.884 [2024-05-15 20:29:27.211528] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.884 [2024-05-15 20:29:27.211750] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.884 [2024-05-15 20:29:27.211757] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.884 [2024-05-15 20:29:27.211763] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.884 [2024-05-15 20:29:27.215351] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.884 [2024-05-15 20:29:27.224238] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.884 [2024-05-15 20:29:27.224858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.884 [2024-05-15 20:29:27.225254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.884 [2024-05-15 20:29:27.225264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.884 [2024-05-15 20:29:27.225271] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.884 [2024-05-15 20:29:27.225496] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.884 [2024-05-15 20:29:27.225717] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.884 [2024-05-15 20:29:27.225728] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.884 [2024-05-15 20:29:27.225735] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.884 [2024-05-15 20:29:27.229327] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.884 [2024-05-15 20:29:27.238223] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.884 [2024-05-15 20:29:27.238874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.884 [2024-05-15 20:29:27.239258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.884 [2024-05-15 20:29:27.239270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.884 [2024-05-15 20:29:27.239280] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.884 [2024-05-15 20:29:27.239528] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.884 [2024-05-15 20:29:27.239753] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.884 [2024-05-15 20:29:27.239762] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.884 [2024-05-15 20:29:27.239769] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.884 [2024-05-15 20:29:27.243367] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.884 [2024-05-15 20:29:27.252260] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.884 [2024-05-15 20:29:27.252954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.884 [2024-05-15 20:29:27.253356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.884 [2024-05-15 20:29:27.253371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.884 [2024-05-15 20:29:27.253380] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.884 [2024-05-15 20:29:27.253621] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.884 [2024-05-15 20:29:27.253845] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.884 [2024-05-15 20:29:27.253854] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.884 [2024-05-15 20:29:27.253861] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.884 [2024-05-15 20:29:27.257468] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.884 [2024-05-15 20:29:27.266153] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.885 [2024-05-15 20:29:27.266619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.885 [2024-05-15 20:29:27.267037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.885 [2024-05-15 20:29:27.267046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.885 [2024-05-15 20:29:27.267054] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.885 [2024-05-15 20:29:27.267278] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.885 [2024-05-15 20:29:27.267505] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.885 [2024-05-15 20:29:27.267513] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.885 [2024-05-15 20:29:27.267523] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.885 [2024-05-15 20:29:27.271112] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.885 [2024-05-15 20:29:27.280001] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.885 [2024-05-15 20:29:27.280572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.885 [2024-05-15 20:29:27.280962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.885 [2024-05-15 20:29:27.280971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.885 [2024-05-15 20:29:27.280978] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.885 [2024-05-15 20:29:27.281200] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.885 [2024-05-15 20:29:27.281424] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.885 [2024-05-15 20:29:27.281432] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.885 [2024-05-15 20:29:27.281439] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.885 [2024-05-15 20:29:27.285030] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.885 [2024-05-15 20:29:27.293925] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.885 [2024-05-15 20:29:27.294515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.885 [2024-05-15 20:29:27.294878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.885 [2024-05-15 20:29:27.294887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.885 [2024-05-15 20:29:27.294895] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.885 [2024-05-15 20:29:27.295116] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.885 [2024-05-15 20:29:27.295340] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.885 [2024-05-15 20:29:27.295348] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.885 [2024-05-15 20:29:27.295355] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.885 [2024-05-15 20:29:27.298945] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.885 [2024-05-15 20:29:27.307835] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.885 [2024-05-15 20:29:27.308576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.885 [2024-05-15 20:29:27.308964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.885 [2024-05-15 20:29:27.308977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.885 [2024-05-15 20:29:27.308986] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.885 [2024-05-15 20:29:27.309227] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.885 [2024-05-15 20:29:27.309459] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.885 [2024-05-15 20:29:27.309467] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.885 [2024-05-15 20:29:27.309475] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.885 [2024-05-15 20:29:27.313070] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.885 [2024-05-15 20:29:27.321759] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.885 [2024-05-15 20:29:27.322436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.885 [2024-05-15 20:29:27.322836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.885 [2024-05-15 20:29:27.322849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.885 [2024-05-15 20:29:27.322858] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.885 [2024-05-15 20:29:27.323099] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.885 [2024-05-15 20:29:27.323331] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.885 [2024-05-15 20:29:27.323340] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.885 [2024-05-15 20:29:27.323347] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.885 [2024-05-15 20:29:27.326942] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.885 [2024-05-15 20:29:27.335623] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.885 [2024-05-15 20:29:27.336088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.885 [2024-05-15 20:29:27.336447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.885 [2024-05-15 20:29:27.336457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.885 [2024-05-15 20:29:27.336465] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.885 [2024-05-15 20:29:27.336687] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.885 [2024-05-15 20:29:27.336907] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.885 [2024-05-15 20:29:27.336915] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.885 [2024-05-15 20:29:27.336922] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.885 [2024-05-15 20:29:27.340525] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.885 [2024-05-15 20:29:27.349631] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.885 [2024-05-15 20:29:27.350363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.885 [2024-05-15 20:29:27.350818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.885 [2024-05-15 20:29:27.350831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.885 [2024-05-15 20:29:27.350841] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.885 [2024-05-15 20:29:27.351082] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.885 [2024-05-15 20:29:27.351307] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.885 [2024-05-15 20:29:27.351324] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.885 [2024-05-15 20:29:27.351331] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.885 [2024-05-15 20:29:27.354925] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:34.885 [2024-05-15 20:29:27.363618] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.885 [2024-05-15 20:29:27.364355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.885 [2024-05-15 20:29:27.364749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.885 [2024-05-15 20:29:27.364762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.885 [2024-05-15 20:29:27.364771] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.885 [2024-05-15 20:29:27.365011] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.885 [2024-05-15 20:29:27.365236] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.885 [2024-05-15 20:29:27.365244] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.885 [2024-05-15 20:29:27.365251] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.885 [2024-05-15 20:29:27.368852] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:34.885 [2024-05-15 20:29:27.377539] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:34.885 [2024-05-15 20:29:27.378181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.885 [2024-05-15 20:29:27.378554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:34.885 [2024-05-15 20:29:27.378565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:34.885 [2024-05-15 20:29:27.378572] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:34.885 [2024-05-15 20:29:27.378794] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:34.885 [2024-05-15 20:29:27.379015] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:34.885 [2024-05-15 20:29:27.379022] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:34.885 [2024-05-15 20:29:27.379028] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:34.885 [2024-05-15 20:29:27.382622] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.147 [2024-05-15 20:29:27.391516] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.147 [2024-05-15 20:29:27.392081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.147 [2024-05-15 20:29:27.392382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.147 [2024-05-15 20:29:27.392397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.147 [2024-05-15 20:29:27.392406] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.147 [2024-05-15 20:29:27.392647] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.147 [2024-05-15 20:29:27.392872] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.147 [2024-05-15 20:29:27.392880] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.147 [2024-05-15 20:29:27.392887] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.147 [2024-05-15 20:29:27.396486] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.147 [2024-05-15 20:29:27.405382] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.147 [2024-05-15 20:29:27.405896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.147 [2024-05-15 20:29:27.406291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.147 [2024-05-15 20:29:27.406301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.147 [2024-05-15 20:29:27.406309] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.147 [2024-05-15 20:29:27.406537] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.147 [2024-05-15 20:29:27.406757] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.147 [2024-05-15 20:29:27.406765] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.147 [2024-05-15 20:29:27.406771] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.147 [2024-05-15 20:29:27.410367] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.147 [2024-05-15 20:29:27.419256] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.147 [2024-05-15 20:29:27.419848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.147 [2024-05-15 20:29:27.420248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.147 [2024-05-15 20:29:27.420257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.147 [2024-05-15 20:29:27.420264] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.147 [2024-05-15 20:29:27.420489] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.147 [2024-05-15 20:29:27.420710] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.147 [2024-05-15 20:29:27.420717] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.147 [2024-05-15 20:29:27.420724] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.147 [2024-05-15 20:29:27.424308] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.147 [2024-05-15 20:29:27.433201] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.147 [2024-05-15 20:29:27.433820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.147 [2024-05-15 20:29:27.434209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.147 [2024-05-15 20:29:27.434223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.147 [2024-05-15 20:29:27.434232] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.147 [2024-05-15 20:29:27.434480] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.147 [2024-05-15 20:29:27.434706] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.147 [2024-05-15 20:29:27.434714] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.147 [2024-05-15 20:29:27.434721] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.147 [2024-05-15 20:29:27.438326] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.147 [2024-05-15 20:29:27.447221] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.147 [2024-05-15 20:29:27.447911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.147 [2024-05-15 20:29:27.448297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.147 [2024-05-15 20:29:27.448320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.147 [2024-05-15 20:29:27.448330] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.147 [2024-05-15 20:29:27.448572] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.147 [2024-05-15 20:29:27.448797] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.147 [2024-05-15 20:29:27.448804] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.147 [2024-05-15 20:29:27.448812] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.147 [2024-05-15 20:29:27.452408] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.147 [2024-05-15 20:29:27.461094] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.147 [2024-05-15 20:29:27.461856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.147 [2024-05-15 20:29:27.462234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.147 [2024-05-15 20:29:27.462247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.147 [2024-05-15 20:29:27.462256] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.148 [2024-05-15 20:29:27.462505] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.148 [2024-05-15 20:29:27.462730] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.148 [2024-05-15 20:29:27.462738] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.148 [2024-05-15 20:29:27.462745] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.148 [2024-05-15 20:29:27.466349] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.148 [2024-05-15 20:29:27.475115] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.148 [2024-05-15 20:29:27.475907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.148 [2024-05-15 20:29:27.476200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.148 [2024-05-15 20:29:27.476212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.148 [2024-05-15 20:29:27.476222] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.148 [2024-05-15 20:29:27.476469] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.148 [2024-05-15 20:29:27.476694] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.148 [2024-05-15 20:29:27.476702] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.148 [2024-05-15 20:29:27.476709] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.148 [2024-05-15 20:29:27.480300] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.148 [2024-05-15 20:29:27.488984] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.148 [2024-05-15 20:29:27.489739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.148 [2024-05-15 20:29:27.490060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.148 [2024-05-15 20:29:27.490072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.148 [2024-05-15 20:29:27.490091] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.148 [2024-05-15 20:29:27.490337] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.148 [2024-05-15 20:29:27.490563] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.148 [2024-05-15 20:29:27.490571] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.148 [2024-05-15 20:29:27.490578] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.148 [2024-05-15 20:29:27.494173] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.148 [2024-05-15 20:29:27.502863] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.148 [2024-05-15 20:29:27.503612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.148 [2024-05-15 20:29:27.504104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.148 [2024-05-15 20:29:27.504117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.148 [2024-05-15 20:29:27.504126] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.148 [2024-05-15 20:29:27.504372] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.148 [2024-05-15 20:29:27.504597] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.148 [2024-05-15 20:29:27.504605] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.148 [2024-05-15 20:29:27.504612] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.148 [2024-05-15 20:29:27.508203] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.148 [2024-05-15 20:29:27.516893] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.148 [2024-05-15 20:29:27.517630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.148 [2024-05-15 20:29:27.518078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.148 [2024-05-15 20:29:27.518090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.148 [2024-05-15 20:29:27.518099] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.148 [2024-05-15 20:29:27.518346] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.148 [2024-05-15 20:29:27.518571] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.148 [2024-05-15 20:29:27.518579] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.148 [2024-05-15 20:29:27.518587] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.148 [2024-05-15 20:29:27.522186] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.148 [2024-05-15 20:29:27.530876] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.148 [2024-05-15 20:29:27.531531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.148 [2024-05-15 20:29:27.531921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.148 [2024-05-15 20:29:27.531933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.148 [2024-05-15 20:29:27.531942] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.148 [2024-05-15 20:29:27.532187] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.148 [2024-05-15 20:29:27.532419] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.148 [2024-05-15 20:29:27.532428] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.148 [2024-05-15 20:29:27.532435] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.148 [2024-05-15 20:29:27.536032] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.148 [2024-05-15 20:29:27.544733] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.148 [2024-05-15 20:29:27.545372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.148 [2024-05-15 20:29:27.545832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.148 [2024-05-15 20:29:27.545842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.148 [2024-05-15 20:29:27.545849] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.148 [2024-05-15 20:29:27.546070] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.148 [2024-05-15 20:29:27.546291] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.148 [2024-05-15 20:29:27.546299] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.148 [2024-05-15 20:29:27.546305] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.148 [2024-05-15 20:29:27.549900] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.148 [2024-05-15 20:29:27.558580] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.148 [2024-05-15 20:29:27.559300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.148 [2024-05-15 20:29:27.559725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.148 [2024-05-15 20:29:27.559737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.148 [2024-05-15 20:29:27.559747] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.148 [2024-05-15 20:29:27.559988] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.148 [2024-05-15 20:29:27.560212] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.148 [2024-05-15 20:29:27.560220] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.148 [2024-05-15 20:29:27.560227] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.148 [2024-05-15 20:29:27.563828] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.148 [2024-05-15 20:29:27.572516] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.148 [2024-05-15 20:29:27.573198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.148 [2024-05-15 20:29:27.573642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.148 [2024-05-15 20:29:27.573656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.148 [2024-05-15 20:29:27.573665] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.148 [2024-05-15 20:29:27.573906] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.148 [2024-05-15 20:29:27.574135] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.148 [2024-05-15 20:29:27.574143] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.148 [2024-05-15 20:29:27.574151] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.148 [2024-05-15 20:29:27.577747] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.148 [2024-05-15 20:29:27.586432] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.148 [2024-05-15 20:29:27.587117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.148 [2024-05-15 20:29:27.587506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.148 [2024-05-15 20:29:27.587520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.148 [2024-05-15 20:29:27.587529] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.148 [2024-05-15 20:29:27.587770] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.148 [2024-05-15 20:29:27.587995] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.148 [2024-05-15 20:29:27.588003] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.148 [2024-05-15 20:29:27.588010] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.148 [2024-05-15 20:29:27.591611] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.148 [2024-05-15 20:29:27.600293] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.148 [2024-05-15 20:29:27.601028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.148 [2024-05-15 20:29:27.601417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.149 [2024-05-15 20:29:27.601430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.149 [2024-05-15 20:29:27.601440] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.149 [2024-05-15 20:29:27.601681] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.149 [2024-05-15 20:29:27.601905] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.149 [2024-05-15 20:29:27.601913] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.149 [2024-05-15 20:29:27.601920] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.149 [2024-05-15 20:29:27.605518] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.149 [2024-05-15 20:29:27.614194] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.149 [2024-05-15 20:29:27.614815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.149 [2024-05-15 20:29:27.615202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.149 [2024-05-15 20:29:27.615215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.149 [2024-05-15 20:29:27.615224] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.149 [2024-05-15 20:29:27.615473] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.149 [2024-05-15 20:29:27.615698] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.149 [2024-05-15 20:29:27.615711] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.149 [2024-05-15 20:29:27.615718] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.149 [2024-05-15 20:29:27.619316] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.149 [2024-05-15 20:29:27.628208] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.149 [2024-05-15 20:29:27.628904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.149 [2024-05-15 20:29:27.629288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.149 [2024-05-15 20:29:27.629301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.149 [2024-05-15 20:29:27.629310] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.149 [2024-05-15 20:29:27.629559] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.149 [2024-05-15 20:29:27.629784] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.149 [2024-05-15 20:29:27.629792] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.149 [2024-05-15 20:29:27.629800] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.149 [2024-05-15 20:29:27.633395] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.149 [2024-05-15 20:29:27.642083] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.149 [2024-05-15 20:29:27.642774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.149 [2024-05-15 20:29:27.643163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.149 [2024-05-15 20:29:27.643176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.149 [2024-05-15 20:29:27.643185] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.149 [2024-05-15 20:29:27.643434] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.149 [2024-05-15 20:29:27.643660] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.149 [2024-05-15 20:29:27.643668] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.149 [2024-05-15 20:29:27.643675] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.410 [2024-05-15 20:29:27.647268] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.410 [2024-05-15 20:29:27.655956] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.410 [2024-05-15 20:29:27.656633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.410 [2024-05-15 20:29:27.657019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.410 [2024-05-15 20:29:27.657032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.411 [2024-05-15 20:29:27.657041] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.411 [2024-05-15 20:29:27.657282] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.411 [2024-05-15 20:29:27.657516] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.411 [2024-05-15 20:29:27.657524] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.411 [2024-05-15 20:29:27.657536] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.411 [2024-05-15 20:29:27.661126] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.411 [2024-05-15 20:29:27.669810] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.411 [2024-05-15 20:29:27.670550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.411 [2024-05-15 20:29:27.670873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.411 [2024-05-15 20:29:27.670885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.411 [2024-05-15 20:29:27.670895] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.411 [2024-05-15 20:29:27.671135] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.411 [2024-05-15 20:29:27.671370] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.411 [2024-05-15 20:29:27.671379] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.411 [2024-05-15 20:29:27.671387] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.411 [2024-05-15 20:29:27.674988] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.411 [2024-05-15 20:29:27.683672] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.411 [2024-05-15 20:29:27.684408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.411 [2024-05-15 20:29:27.684807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.411 [2024-05-15 20:29:27.684819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.411 [2024-05-15 20:29:27.684828] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.411 [2024-05-15 20:29:27.685069] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.411 [2024-05-15 20:29:27.685294] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.411 [2024-05-15 20:29:27.685302] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.411 [2024-05-15 20:29:27.685309] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.411 [2024-05-15 20:29:27.688911] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.411 [2024-05-15 20:29:27.697593] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.411 [2024-05-15 20:29:27.698277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.411 [2024-05-15 20:29:27.698666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.411 [2024-05-15 20:29:27.698679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.411 [2024-05-15 20:29:27.698688] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.411 [2024-05-15 20:29:27.698929] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.411 [2024-05-15 20:29:27.699153] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.411 [2024-05-15 20:29:27.699161] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.411 [2024-05-15 20:29:27.699168] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.411 [2024-05-15 20:29:27.702764] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.411 [2024-05-15 20:29:27.711450] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.411 [2024-05-15 20:29:27.712108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.411 [2024-05-15 20:29:27.712495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.411 [2024-05-15 20:29:27.712510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.411 [2024-05-15 20:29:27.712520] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.411 [2024-05-15 20:29:27.712761] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.411 [2024-05-15 20:29:27.712986] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.411 [2024-05-15 20:29:27.712994] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.411 [2024-05-15 20:29:27.713001] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.411 [2024-05-15 20:29:27.716602] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.411 [2024-05-15 20:29:27.725287] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.411 [2024-05-15 20:29:27.726020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.411 [2024-05-15 20:29:27.726406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.411 [2024-05-15 20:29:27.726420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.411 [2024-05-15 20:29:27.726429] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.411 [2024-05-15 20:29:27.726670] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.411 [2024-05-15 20:29:27.726895] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.411 [2024-05-15 20:29:27.726903] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.411 [2024-05-15 20:29:27.726910] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.411 [2024-05-15 20:29:27.730508] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.411 [2024-05-15 20:29:27.739196] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.411 [2024-05-15 20:29:27.739847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.411 [2024-05-15 20:29:27.740207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.411 [2024-05-15 20:29:27.740216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.411 [2024-05-15 20:29:27.740224] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.411 [2024-05-15 20:29:27.740450] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.411 [2024-05-15 20:29:27.740671] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.411 [2024-05-15 20:29:27.740678] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.411 [2024-05-15 20:29:27.740685] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.411 [2024-05-15 20:29:27.744276] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.411 [2024-05-15 20:29:27.753169] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.411 [2024-05-15 20:29:27.753899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.411 [2024-05-15 20:29:27.754278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.411 [2024-05-15 20:29:27.754291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.411 [2024-05-15 20:29:27.754300] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.411 [2024-05-15 20:29:27.754547] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.411 [2024-05-15 20:29:27.754773] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.411 [2024-05-15 20:29:27.754781] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.411 [2024-05-15 20:29:27.754788] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.411 [2024-05-15 20:29:27.758387] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.411 [2024-05-15 20:29:27.767068] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.411 [2024-05-15 20:29:27.767660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.411 [2024-05-15 20:29:27.767904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.411 [2024-05-15 20:29:27.767916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.411 [2024-05-15 20:29:27.767926] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.411 [2024-05-15 20:29:27.768166] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.411 [2024-05-15 20:29:27.768400] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.411 [2024-05-15 20:29:27.768408] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.411 [2024-05-15 20:29:27.768416] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.411 [2024-05-15 20:29:27.772009] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.411 [2024-05-15 20:29:27.781117] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.411 [2024-05-15 20:29:27.781806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.411 [2024-05-15 20:29:27.782133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.411 [2024-05-15 20:29:27.782146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.411 [2024-05-15 20:29:27.782155] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.411 [2024-05-15 20:29:27.782404] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.411 [2024-05-15 20:29:27.782630] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.411 [2024-05-15 20:29:27.782637] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.411 [2024-05-15 20:29:27.782645] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.411 [2024-05-15 20:29:27.786238] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.412 [2024-05-15 20:29:27.795132] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.412 [2024-05-15 20:29:27.795895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.412 [2024-05-15 20:29:27.796280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.412 [2024-05-15 20:29:27.796292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.412 [2024-05-15 20:29:27.796301] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.412 [2024-05-15 20:29:27.796550] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.412 [2024-05-15 20:29:27.796776] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.412 [2024-05-15 20:29:27.796784] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.412 [2024-05-15 20:29:27.796791] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.412 [2024-05-15 20:29:27.800386] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.412 [2024-05-15 20:29:27.809069] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.412 [2024-05-15 20:29:27.809737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.412 [2024-05-15 20:29:27.810124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.412 [2024-05-15 20:29:27.810136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.412 [2024-05-15 20:29:27.810145] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.412 [2024-05-15 20:29:27.810395] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.412 [2024-05-15 20:29:27.810620] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.412 [2024-05-15 20:29:27.810628] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.412 [2024-05-15 20:29:27.810635] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.412 [2024-05-15 20:29:27.814229] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.412 [2024-05-15 20:29:27.822916] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.412 [2024-05-15 20:29:27.823582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.412 [2024-05-15 20:29:27.823972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.412 [2024-05-15 20:29:27.823984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.412 [2024-05-15 20:29:27.823993] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.412 [2024-05-15 20:29:27.824234] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.412 [2024-05-15 20:29:27.824465] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.412 [2024-05-15 20:29:27.824474] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.412 [2024-05-15 20:29:27.824482] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.412 [2024-05-15 20:29:27.828145] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.412 [2024-05-15 20:29:27.836836] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.412 [2024-05-15 20:29:27.837443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.412 [2024-05-15 20:29:27.837742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.412 [2024-05-15 20:29:27.837759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.412 [2024-05-15 20:29:27.837768] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.412 [2024-05-15 20:29:27.838009] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.412 [2024-05-15 20:29:27.838234] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.412 [2024-05-15 20:29:27.838242] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.412 [2024-05-15 20:29:27.838250] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.412 [2024-05-15 20:29:27.841856] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.412 [2024-05-15 20:29:27.850753] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.412 [2024-05-15 20:29:27.851418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.412 [2024-05-15 20:29:27.851811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.412 [2024-05-15 20:29:27.851823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.412 [2024-05-15 20:29:27.851832] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.412 [2024-05-15 20:29:27.852073] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.412 [2024-05-15 20:29:27.852297] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.412 [2024-05-15 20:29:27.852305] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.412 [2024-05-15 20:29:27.852320] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.412 [2024-05-15 20:29:27.855911] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.412 [2024-05-15 20:29:27.864595] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.412 [2024-05-15 20:29:27.865258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.412 [2024-05-15 20:29:27.865654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.412 [2024-05-15 20:29:27.865668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.412 [2024-05-15 20:29:27.865677] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.412 [2024-05-15 20:29:27.865918] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.412 [2024-05-15 20:29:27.866143] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.412 [2024-05-15 20:29:27.866150] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.412 [2024-05-15 20:29:27.866158] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.412 [2024-05-15 20:29:27.869757] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.412 [2024-05-15 20:29:27.878441] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.412 [2024-05-15 20:29:27.879167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.412 [2024-05-15 20:29:27.879463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.412 [2024-05-15 20:29:27.879478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.412 [2024-05-15 20:29:27.879491] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.412 [2024-05-15 20:29:27.879733] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.412 [2024-05-15 20:29:27.879958] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.412 [2024-05-15 20:29:27.879967] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.412 [2024-05-15 20:29:27.879975] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.412 [2024-05-15 20:29:27.883582] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.412 [2024-05-15 20:29:27.892487] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.412 [2024-05-15 20:29:27.893082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.412 [2024-05-15 20:29:27.893462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.412 [2024-05-15 20:29:27.893472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.412 [2024-05-15 20:29:27.893479] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.412 [2024-05-15 20:29:27.893701] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.412 [2024-05-15 20:29:27.893922] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.412 [2024-05-15 20:29:27.893929] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.412 [2024-05-15 20:29:27.893935] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.412 [2024-05-15 20:29:27.897528] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.412 [2024-05-15 20:29:27.906416] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.412 [2024-05-15 20:29:27.907112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.412 [2024-05-15 20:29:27.907419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.412 [2024-05-15 20:29:27.907433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.412 [2024-05-15 20:29:27.907443] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.412 [2024-05-15 20:29:27.907684] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.412 [2024-05-15 20:29:27.907908] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.412 [2024-05-15 20:29:27.907917] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.412 [2024-05-15 20:29:27.907924] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.674 [2024-05-15 20:29:27.911525] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.674 [2024-05-15 20:29:27.920422] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.674 [2024-05-15 20:29:27.921019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.674 [2024-05-15 20:29:27.921428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.674 [2024-05-15 20:29:27.921438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.674 [2024-05-15 20:29:27.921446] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.674 [2024-05-15 20:29:27.921672] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.674 [2024-05-15 20:29:27.921894] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.674 [2024-05-15 20:29:27.921901] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.674 [2024-05-15 20:29:27.921907] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.674 [2024-05-15 20:29:27.925501] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.674 [2024-05-15 20:29:27.934393] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.674 [2024-05-15 20:29:27.935114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.674 [2024-05-15 20:29:27.935516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.674 [2024-05-15 20:29:27.935529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.674 [2024-05-15 20:29:27.935538] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.674 [2024-05-15 20:29:27.935779] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.674 [2024-05-15 20:29:27.936004] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.674 [2024-05-15 20:29:27.936011] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.674 [2024-05-15 20:29:27.936019] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.674 [2024-05-15 20:29:27.939630] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.674 [2024-05-15 20:29:27.948311] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.674 [2024-05-15 20:29:27.949009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.674 [2024-05-15 20:29:27.949392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.674 [2024-05-15 20:29:27.949405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.674 [2024-05-15 20:29:27.949415] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.674 [2024-05-15 20:29:27.949655] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.674 [2024-05-15 20:29:27.949880] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.674 [2024-05-15 20:29:27.949887] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.674 [2024-05-15 20:29:27.949895] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.674 [2024-05-15 20:29:27.953492] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.674 [2024-05-15 20:29:27.962174] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.674 [2024-05-15 20:29:27.962760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.674 [2024-05-15 20:29:27.963167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.674 [2024-05-15 20:29:27.963179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.675 [2024-05-15 20:29:27.963188] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.675 [2024-05-15 20:29:27.963438] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.675 [2024-05-15 20:29:27.963668] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.675 [2024-05-15 20:29:27.963676] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.675 [2024-05-15 20:29:27.963683] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.675 [2024-05-15 20:29:27.967283] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.675 [2024-05-15 20:29:27.976178] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.675 [2024-05-15 20:29:27.976896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.675 [2024-05-15 20:29:27.977293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.675 [2024-05-15 20:29:27.977306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.675 [2024-05-15 20:29:27.977322] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.675 [2024-05-15 20:29:27.977563] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.675 [2024-05-15 20:29:27.977788] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.675 [2024-05-15 20:29:27.977795] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.675 [2024-05-15 20:29:27.977803] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.675 [2024-05-15 20:29:27.981397] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.675 [2024-05-15 20:29:27.990083] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.675 [2024-05-15 20:29:27.990795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.675 [2024-05-15 20:29:27.991182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.675 [2024-05-15 20:29:27.991194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.675 [2024-05-15 20:29:27.991203] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.675 [2024-05-15 20:29:27.991453] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.675 [2024-05-15 20:29:27.991678] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.675 [2024-05-15 20:29:27.991687] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.675 [2024-05-15 20:29:27.991695] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.675 [2024-05-15 20:29:27.995287] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.675 [2024-05-15 20:29:28.003970] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.675 [2024-05-15 20:29:28.004679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.675 [2024-05-15 20:29:28.005063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.675 [2024-05-15 20:29:28.005075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.675 [2024-05-15 20:29:28.005084] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.675 [2024-05-15 20:29:28.005333] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.675 [2024-05-15 20:29:28.005559] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.675 [2024-05-15 20:29:28.005571] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.675 [2024-05-15 20:29:28.005579] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.675 [2024-05-15 20:29:28.009174] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.675 [2024-05-15 20:29:28.017859] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.675 [2024-05-15 20:29:28.018568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.675 [2024-05-15 20:29:28.018954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.675 [2024-05-15 20:29:28.018966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.675 [2024-05-15 20:29:28.018975] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.675 [2024-05-15 20:29:28.019216] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.675 [2024-05-15 20:29:28.019448] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.675 [2024-05-15 20:29:28.019457] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.675 [2024-05-15 20:29:28.019464] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.675 [2024-05-15 20:29:28.023058] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.675 [2024-05-15 20:29:28.031742] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.675 [2024-05-15 20:29:28.032420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.675 [2024-05-15 20:29:28.032822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.675 [2024-05-15 20:29:28.032834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.675 [2024-05-15 20:29:28.032843] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.675 [2024-05-15 20:29:28.033084] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.675 [2024-05-15 20:29:28.033308] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.675 [2024-05-15 20:29:28.033326] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.675 [2024-05-15 20:29:28.033334] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.675 [2024-05-15 20:29:28.036923] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.675 [2024-05-15 20:29:28.045617] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.675 [2024-05-15 20:29:28.046282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.675 [2024-05-15 20:29:28.046680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.675 [2024-05-15 20:29:28.046693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.675 [2024-05-15 20:29:28.046703] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.675 [2024-05-15 20:29:28.046943] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.675 [2024-05-15 20:29:28.047167] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.675 [2024-05-15 20:29:28.047176] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.675 [2024-05-15 20:29:28.047187] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.675 [2024-05-15 20:29:28.050787] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.675 [2024-05-15 20:29:28.059473] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.675 [2024-05-15 20:29:28.060173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.675 [2024-05-15 20:29:28.060471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.675 [2024-05-15 20:29:28.060486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.675 [2024-05-15 20:29:28.060495] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.675 [2024-05-15 20:29:28.060737] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.675 [2024-05-15 20:29:28.060962] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.675 [2024-05-15 20:29:28.060970] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.675 [2024-05-15 20:29:28.060977] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.675 [2024-05-15 20:29:28.064571] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.675 [2024-05-15 20:29:28.073465] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.675 [2024-05-15 20:29:28.074143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.675 [2024-05-15 20:29:28.074527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.675 [2024-05-15 20:29:28.074541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.675 [2024-05-15 20:29:28.074550] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.675 [2024-05-15 20:29:28.074791] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.675 [2024-05-15 20:29:28.075015] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.675 [2024-05-15 20:29:28.075023] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.675 [2024-05-15 20:29:28.075030] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.675 [2024-05-15 20:29:28.078628] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.675 [2024-05-15 20:29:28.087309] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.675 [2024-05-15 20:29:28.088040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.675 [2024-05-15 20:29:28.088280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.675 [2024-05-15 20:29:28.088294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.675 [2024-05-15 20:29:28.088304] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.675 [2024-05-15 20:29:28.088553] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.675 [2024-05-15 20:29:28.088780] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.675 [2024-05-15 20:29:28.088789] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.675 [2024-05-15 20:29:28.088796] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.675 [2024-05-15 20:29:28.092416] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.675 [2024-05-15 20:29:28.101309] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.675 [2024-05-15 20:29:28.102044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.676 [2024-05-15 20:29:28.102356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.676 [2024-05-15 20:29:28.102370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.676 [2024-05-15 20:29:28.102380] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.676 [2024-05-15 20:29:28.102620] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.676 [2024-05-15 20:29:28.102844] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.676 [2024-05-15 20:29:28.102852] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.676 [2024-05-15 20:29:28.102859] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.676 [2024-05-15 20:29:28.106456] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.676 [2024-05-15 20:29:28.115346] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.676 [2024-05-15 20:29:28.116043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.676 [2024-05-15 20:29:28.116428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.676 [2024-05-15 20:29:28.116443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.676 [2024-05-15 20:29:28.116452] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.676 [2024-05-15 20:29:28.116693] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.676 [2024-05-15 20:29:28.116918] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.676 [2024-05-15 20:29:28.116926] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.676 [2024-05-15 20:29:28.116933] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.676 [2024-05-15 20:29:28.120531] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.676 [2024-05-15 20:29:28.129215] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.676 [2024-05-15 20:29:28.129893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.676 [2024-05-15 20:29:28.130272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.676 [2024-05-15 20:29:28.130285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.676 [2024-05-15 20:29:28.130294] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.676 [2024-05-15 20:29:28.130542] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.676 [2024-05-15 20:29:28.130767] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.676 [2024-05-15 20:29:28.130775] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.676 [2024-05-15 20:29:28.130783] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.676 [2024-05-15 20:29:28.134374] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.676 [2024-05-15 20:29:28.143066] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.676 [2024-05-15 20:29:28.143806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.676 [2024-05-15 20:29:28.144277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.676 [2024-05-15 20:29:28.144289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.676 [2024-05-15 20:29:28.144299] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.676 [2024-05-15 20:29:28.144548] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.676 [2024-05-15 20:29:28.144774] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.676 [2024-05-15 20:29:28.144782] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.676 [2024-05-15 20:29:28.144789] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.676 [2024-05-15 20:29:28.148384] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.676 [2024-05-15 20:29:28.157063] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.676 [2024-05-15 20:29:28.157800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.676 [2024-05-15 20:29:28.158194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.676 [2024-05-15 20:29:28.158207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.676 [2024-05-15 20:29:28.158216] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.676 [2024-05-15 20:29:28.158465] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.676 [2024-05-15 20:29:28.158691] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.676 [2024-05-15 20:29:28.158698] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.676 [2024-05-15 20:29:28.158706] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.676 [2024-05-15 20:29:28.162300] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.676 [2024-05-15 20:29:28.170983] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.676 [2024-05-15 20:29:28.171683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.676 [2024-05-15 20:29:28.172066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.676 [2024-05-15 20:29:28.172079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.676 [2024-05-15 20:29:28.172088] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.676 [2024-05-15 20:29:28.172339] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.676 [2024-05-15 20:29:28.172565] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.676 [2024-05-15 20:29:28.172572] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.676 [2024-05-15 20:29:28.172580] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.938 [2024-05-15 20:29:28.176173] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.938 [2024-05-15 20:29:28.184859] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.938 [2024-05-15 20:29:28.185587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.938 [2024-05-15 20:29:28.185974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.938 [2024-05-15 20:29:28.185987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.938 [2024-05-15 20:29:28.185997] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.938 [2024-05-15 20:29:28.186237] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.938 [2024-05-15 20:29:28.186470] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.938 [2024-05-15 20:29:28.186478] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.939 [2024-05-15 20:29:28.186485] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.939 [2024-05-15 20:29:28.190080] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.939 [2024-05-15 20:29:28.198766] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.939 [2024-05-15 20:29:28.199421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-05-15 20:29:28.199862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-05-15 20:29:28.199875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.939 [2024-05-15 20:29:28.199884] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.939 [2024-05-15 20:29:28.200125] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.939 [2024-05-15 20:29:28.200358] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.939 [2024-05-15 20:29:28.200367] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.939 [2024-05-15 20:29:28.200374] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.939 [2024-05-15 20:29:28.203968] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.939 [2024-05-15 20:29:28.212651] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.939 [2024-05-15 20:29:28.213374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-05-15 20:29:28.213763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-05-15 20:29:28.213776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.939 [2024-05-15 20:29:28.213786] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.939 [2024-05-15 20:29:28.214027] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.939 [2024-05-15 20:29:28.214252] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.939 [2024-05-15 20:29:28.214260] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.939 [2024-05-15 20:29:28.214267] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.939 [2024-05-15 20:29:28.217868] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.939 [2024-05-15 20:29:28.226556] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.939 [2024-05-15 20:29:28.227284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-05-15 20:29:28.227729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-05-15 20:29:28.227747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.939 [2024-05-15 20:29:28.227757] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.939 [2024-05-15 20:29:28.227998] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.939 [2024-05-15 20:29:28.228222] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.939 [2024-05-15 20:29:28.228230] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.939 [2024-05-15 20:29:28.228237] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.939 [2024-05-15 20:29:28.231833] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.939 [2024-05-15 20:29:28.240526] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.939 [2024-05-15 20:29:28.241212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-05-15 20:29:28.241669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-05-15 20:29:28.241683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.939 [2024-05-15 20:29:28.241692] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.939 [2024-05-15 20:29:28.241933] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.939 [2024-05-15 20:29:28.242158] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.939 [2024-05-15 20:29:28.242165] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.939 [2024-05-15 20:29:28.242173] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.939 [2024-05-15 20:29:28.245770] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.939 [2024-05-15 20:29:28.254460] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.939 [2024-05-15 20:29:28.255124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-05-15 20:29:28.255514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-05-15 20:29:28.255529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.939 [2024-05-15 20:29:28.255538] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.939 [2024-05-15 20:29:28.255778] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.939 [2024-05-15 20:29:28.256003] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.939 [2024-05-15 20:29:28.256011] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.939 [2024-05-15 20:29:28.256018] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.939 [2024-05-15 20:29:28.259614] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.939 [2024-05-15 20:29:28.268295] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.939 [2024-05-15 20:29:28.268878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-05-15 20:29:28.269264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-05-15 20:29:28.269277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.939 [2024-05-15 20:29:28.269290] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.939 [2024-05-15 20:29:28.269541] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.939 [2024-05-15 20:29:28.269767] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.939 [2024-05-15 20:29:28.269775] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.939 [2024-05-15 20:29:28.269782] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.939 [2024-05-15 20:29:28.273377] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.939 [2024-05-15 20:29:28.282266] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.939 [2024-05-15 20:29:28.282977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-05-15 20:29:28.283365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-05-15 20:29:28.283379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.939 [2024-05-15 20:29:28.283388] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.939 [2024-05-15 20:29:28.283629] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.939 [2024-05-15 20:29:28.283853] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.939 [2024-05-15 20:29:28.283861] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.939 [2024-05-15 20:29:28.283868] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.939 [2024-05-15 20:29:28.287461] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.939 [2024-05-15 20:29:28.296140] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.939 [2024-05-15 20:29:28.296754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-05-15 20:29:28.297138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-05-15 20:29:28.297151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.939 [2024-05-15 20:29:28.297160] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.939 [2024-05-15 20:29:28.297412] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.939 [2024-05-15 20:29:28.297638] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.939 [2024-05-15 20:29:28.297646] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.939 [2024-05-15 20:29:28.297653] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.939 [2024-05-15 20:29:28.301248] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.939 [2024-05-15 20:29:28.310145] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.939 [2024-05-15 20:29:28.310839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-05-15 20:29:28.311224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-05-15 20:29:28.311237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.939 [2024-05-15 20:29:28.311246] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.939 [2024-05-15 20:29:28.311500] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.939 [2024-05-15 20:29:28.311726] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.939 [2024-05-15 20:29:28.311734] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.939 [2024-05-15 20:29:28.311741] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.939 [2024-05-15 20:29:28.315340] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.939 [2024-05-15 20:29:28.324031] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.939 [2024-05-15 20:29:28.324733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.939 [2024-05-15 20:29:28.325125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-05-15 20:29:28.325137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.940 [2024-05-15 20:29:28.325146] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.940 [2024-05-15 20:29:28.325395] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.940 [2024-05-15 20:29:28.325621] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.940 [2024-05-15 20:29:28.325629] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.940 [2024-05-15 20:29:28.325636] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.940 [2024-05-15 20:29:28.329232] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.940 [2024-05-15 20:29:28.337921] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.940 [2024-05-15 20:29:28.338537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-05-15 20:29:28.338923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-05-15 20:29:28.338935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.940 [2024-05-15 20:29:28.338945] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.940 [2024-05-15 20:29:28.339186] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.940 [2024-05-15 20:29:28.339428] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.940 [2024-05-15 20:29:28.339437] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.940 [2024-05-15 20:29:28.339444] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.940 [2024-05-15 20:29:28.343038] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.940 [2024-05-15 20:29:28.351932] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.940 [2024-05-15 20:29:28.352615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-05-15 20:29:28.352998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-05-15 20:29:28.353010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.940 [2024-05-15 20:29:28.353019] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.940 [2024-05-15 20:29:28.353260] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.940 [2024-05-15 20:29:28.353499] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.940 [2024-05-15 20:29:28.353508] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.940 [2024-05-15 20:29:28.353516] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.940 [2024-05-15 20:29:28.357108] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.940 [2024-05-15 20:29:28.365793] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.940 [2024-05-15 20:29:28.366520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-05-15 20:29:28.366904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-05-15 20:29:28.366917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.940 [2024-05-15 20:29:28.366926] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.940 [2024-05-15 20:29:28.367167] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.940 [2024-05-15 20:29:28.367399] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.940 [2024-05-15 20:29:28.367407] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.940 [2024-05-15 20:29:28.367415] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.940 [2024-05-15 20:29:28.371013] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.940 [2024-05-15 20:29:28.379693] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.940 [2024-05-15 20:29:28.380377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-05-15 20:29:28.380760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-05-15 20:29:28.380772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.940 [2024-05-15 20:29:28.380782] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.940 [2024-05-15 20:29:28.381022] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.940 [2024-05-15 20:29:28.381247] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.940 [2024-05-15 20:29:28.381255] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.940 [2024-05-15 20:29:28.381262] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.940 [2024-05-15 20:29:28.384862] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.940 [2024-05-15 20:29:28.393545] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.940 [2024-05-15 20:29:28.394273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-05-15 20:29:28.394736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-05-15 20:29:28.394750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.940 [2024-05-15 20:29:28.394759] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.940 [2024-05-15 20:29:28.394999] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.940 [2024-05-15 20:29:28.395224] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.940 [2024-05-15 20:29:28.395236] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.940 [2024-05-15 20:29:28.395243] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.940 [2024-05-15 20:29:28.398841] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.940 [2024-05-15 20:29:28.407527] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.940 [2024-05-15 20:29:28.408062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-05-15 20:29:28.408451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-05-15 20:29:28.408462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.940 [2024-05-15 20:29:28.408470] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.940 [2024-05-15 20:29:28.408691] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.940 [2024-05-15 20:29:28.408912] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.940 [2024-05-15 20:29:28.408919] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.940 [2024-05-15 20:29:28.408926] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.940 [2024-05-15 20:29:28.412516] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:35.940 [2024-05-15 20:29:28.421405] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.940 [2024-05-15 20:29:28.422035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-05-15 20:29:28.422396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-05-15 20:29:28.422407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.940 [2024-05-15 20:29:28.422414] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.940 [2024-05-15 20:29:28.422635] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.940 [2024-05-15 20:29:28.422855] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.940 [2024-05-15 20:29:28.422862] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.940 [2024-05-15 20:29:28.422869] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:35.940 [2024-05-15 20:29:28.426459] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:35.940 [2024-05-15 20:29:28.435390] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:35.940 [2024-05-15 20:29:28.436093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-05-15 20:29:28.436474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:35.940 [2024-05-15 20:29:28.436488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:35.940 [2024-05-15 20:29:28.436497] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:35.940 [2024-05-15 20:29:28.436737] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:35.940 [2024-05-15 20:29:28.436962] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:35.940 [2024-05-15 20:29:28.436970] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:35.940 [2024-05-15 20:29:28.436981] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.203 [2024-05-15 20:29:28.440596] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:36.203 [2024-05-15 20:29:28.449286] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.203 [2024-05-15 20:29:28.449977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.203 [2024-05-15 20:29:28.450370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.203 [2024-05-15 20:29:28.450383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.203 [2024-05-15 20:29:28.450393] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.203 [2024-05-15 20:29:28.450633] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.203 [2024-05-15 20:29:28.450858] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.203 [2024-05-15 20:29:28.450866] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.203 [2024-05-15 20:29:28.450873] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.203 [2024-05-15 20:29:28.454464] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:36.203 [2024-05-15 20:29:28.463140] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.203 [2024-05-15 20:29:28.463832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.203 [2024-05-15 20:29:28.464216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.203 [2024-05-15 20:29:28.464229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.203 [2024-05-15 20:29:28.464238] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.203 [2024-05-15 20:29:28.464489] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.203 [2024-05-15 20:29:28.464715] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.203 [2024-05-15 20:29:28.464722] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.203 [2024-05-15 20:29:28.464729] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.203 [2024-05-15 20:29:28.468325] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:36.203 [2024-05-15 20:29:28.477020] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.203 [2024-05-15 20:29:28.477719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.203 [2024-05-15 20:29:28.478017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.203 [2024-05-15 20:29:28.478031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.203 [2024-05-15 20:29:28.478040] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.203 [2024-05-15 20:29:28.478281] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.203 [2024-05-15 20:29:28.478512] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.203 [2024-05-15 20:29:28.478521] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.203 [2024-05-15 20:29:28.478529] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.203 [2024-05-15 20:29:28.482125] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:36.203 [2024-05-15 20:29:28.491017] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.203 [2024-05-15 20:29:28.491707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.203 [2024-05-15 20:29:28.492095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.203 [2024-05-15 20:29:28.492108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.203 [2024-05-15 20:29:28.492117] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.203 [2024-05-15 20:29:28.492366] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.203 [2024-05-15 20:29:28.492592] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.203 [2024-05-15 20:29:28.492600] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.203 [2024-05-15 20:29:28.492607] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.203 [2024-05-15 20:29:28.496204] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:36.203 [2024-05-15 20:29:28.504891] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.203 [2024-05-15 20:29:28.505594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.203 [2024-05-15 20:29:28.505985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.203 [2024-05-15 20:29:28.505997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.203 [2024-05-15 20:29:28.506006] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.203 [2024-05-15 20:29:28.506247] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.203 [2024-05-15 20:29:28.506481] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.203 [2024-05-15 20:29:28.506490] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.203 [2024-05-15 20:29:28.506497] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.203 [2024-05-15 20:29:28.510092] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:36.203 [2024-05-15 20:29:28.518774] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.203 [2024-05-15 20:29:28.519411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.203 [2024-05-15 20:29:28.519809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.204 [2024-05-15 20:29:28.519818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.204 [2024-05-15 20:29:28.519826] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.204 [2024-05-15 20:29:28.520048] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.204 [2024-05-15 20:29:28.520269] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.204 [2024-05-15 20:29:28.520276] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.204 [2024-05-15 20:29:28.520283] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.204 [2024-05-15 20:29:28.523876] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:36.204 [2024-05-15 20:29:28.532773] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.204 [2024-05-15 20:29:28.533422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.204 [2024-05-15 20:29:28.533902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.204 [2024-05-15 20:29:28.533915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.204 [2024-05-15 20:29:28.533925] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.204 [2024-05-15 20:29:28.534165] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.204 [2024-05-15 20:29:28.534398] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.204 [2024-05-15 20:29:28.534407] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.204 [2024-05-15 20:29:28.534414] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.204 [2024-05-15 20:29:28.538013] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:36.204 [2024-05-15 20:29:28.546725] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.204 [2024-05-15 20:29:28.547399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.204 [2024-05-15 20:29:28.547769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.204 [2024-05-15 20:29:28.547782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.204 [2024-05-15 20:29:28.547791] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.204 [2024-05-15 20:29:28.548032] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.204 [2024-05-15 20:29:28.548256] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.204 [2024-05-15 20:29:28.548264] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.204 [2024-05-15 20:29:28.548272] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.204 [2024-05-15 20:29:28.551871] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:36.204 [2024-05-15 20:29:28.560768] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.204 [2024-05-15 20:29:28.561396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.204 [2024-05-15 20:29:28.561835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.204 [2024-05-15 20:29:28.561848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.204 [2024-05-15 20:29:28.561857] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.204 [2024-05-15 20:29:28.562098] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.204 [2024-05-15 20:29:28.562330] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.204 [2024-05-15 20:29:28.562339] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.204 [2024-05-15 20:29:28.562346] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.204 [2024-05-15 20:29:28.565941] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:36.204 [2024-05-15 20:29:28.574626] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.204 [2024-05-15 20:29:28.575357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.204 [2024-05-15 20:29:28.575762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.204 [2024-05-15 20:29:28.575774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.204 [2024-05-15 20:29:28.575784] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.204 [2024-05-15 20:29:28.576024] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.204 [2024-05-15 20:29:28.576249] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.204 [2024-05-15 20:29:28.576257] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.204 [2024-05-15 20:29:28.576265] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.204 [2024-05-15 20:29:28.579864] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:36.204 [2024-05-15 20:29:28.588554] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.204 [2024-05-15 20:29:28.589287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.204 [2024-05-15 20:29:28.589635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.204 [2024-05-15 20:29:28.589648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.204 [2024-05-15 20:29:28.589657] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.204 [2024-05-15 20:29:28.589898] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.204 [2024-05-15 20:29:28.590123] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.204 [2024-05-15 20:29:28.590131] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.204 [2024-05-15 20:29:28.590138] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.204 [2024-05-15 20:29:28.593734] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:36.204 [2024-05-15 20:29:28.602432] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.204 [2024-05-15 20:29:28.603143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.204 [2024-05-15 20:29:28.603529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.204 [2024-05-15 20:29:28.603543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.204 [2024-05-15 20:29:28.603553] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.204 [2024-05-15 20:29:28.603794] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.204 [2024-05-15 20:29:28.604018] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.204 [2024-05-15 20:29:28.604026] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.204 [2024-05-15 20:29:28.604035] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.204 [2024-05-15 20:29:28.607823] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:36.204 [2024-05-15 20:29:28.616325] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.204 [2024-05-15 20:29:28.616986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.204 [2024-05-15 20:29:28.617370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.204 [2024-05-15 20:29:28.617388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.204 [2024-05-15 20:29:28.617398] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.204 [2024-05-15 20:29:28.617638] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.204 [2024-05-15 20:29:28.617863] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.204 [2024-05-15 20:29:28.617871] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.205 [2024-05-15 20:29:28.617878] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.205 [2024-05-15 20:29:28.621479] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:36.205 [2024-05-15 20:29:28.630168] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.205 [2024-05-15 20:29:28.630781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.205 [2024-05-15 20:29:28.631165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.205 [2024-05-15 20:29:28.631178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.205 [2024-05-15 20:29:28.631187] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.205 [2024-05-15 20:29:28.631439] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.205 [2024-05-15 20:29:28.631665] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.205 [2024-05-15 20:29:28.631672] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.205 [2024-05-15 20:29:28.631680] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.205 [2024-05-15 20:29:28.635277] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:36.205 [2024-05-15 20:29:28.644205] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.205 [2024-05-15 20:29:28.644847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.205 [2024-05-15 20:29:28.645231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.205 [2024-05-15 20:29:28.645243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.205 [2024-05-15 20:29:28.645252] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.205 [2024-05-15 20:29:28.645503] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.205 [2024-05-15 20:29:28.645728] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.205 [2024-05-15 20:29:28.645736] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.205 [2024-05-15 20:29:28.645743] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.205 [2024-05-15 20:29:28.649342] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:36.205 [2024-05-15 20:29:28.658247] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.205 [2024-05-15 20:29:28.658700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.205 [2024-05-15 20:29:28.658975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.205 [2024-05-15 20:29:28.658986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.205 [2024-05-15 20:29:28.658998] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.205 [2024-05-15 20:29:28.659222] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.205 [2024-05-15 20:29:28.659450] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.205 [2024-05-15 20:29:28.659458] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.205 [2024-05-15 20:29:28.659465] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.205 [2024-05-15 20:29:28.663057] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:36.205 [2024-05-15 20:29:28.672182] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.205 [2024-05-15 20:29:28.672680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.205 [2024-05-15 20:29:28.673086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.205 [2024-05-15 20:29:28.673095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.205 [2024-05-15 20:29:28.673102] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.205 [2024-05-15 20:29:28.673331] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.205 [2024-05-15 20:29:28.673553] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.205 [2024-05-15 20:29:28.673560] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.205 [2024-05-15 20:29:28.673567] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.205 [2024-05-15 20:29:28.677160] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:36.205 [2024-05-15 20:29:28.686067] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.205 [2024-05-15 20:29:28.686685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.205 [2024-05-15 20:29:28.687069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.205 [2024-05-15 20:29:28.687081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.205 [2024-05-15 20:29:28.687090] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.205 [2024-05-15 20:29:28.687339] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.205 [2024-05-15 20:29:28.687565] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.205 [2024-05-15 20:29:28.687573] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.205 [2024-05-15 20:29:28.687580] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.205 [2024-05-15 20:29:28.691208] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:36.205 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 317246 Killed "${NVMF_APP[@]}" "$@" 00:37:36.205 20:29:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:37:36.205 20:29:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:36.205 20:29:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:36.205 20:29:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:37:36.205 20:29:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:36.205 [2024-05-15 20:29:28.700121] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.205 [2024-05-15 20:29:28.700774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.205 [2024-05-15 20:29:28.701165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.205 [2024-05-15 20:29:28.701178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.205 [2024-05-15 20:29:28.701188] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.205 [2024-05-15 20:29:28.701438] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.205 [2024-05-15 20:29:28.701664] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.205 [2024-05-15 20:29:28.701672] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.205 [2024-05-15 20:29:28.701679] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.205 20:29:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=318652 00:37:36.205 20:29:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 318652 00:37:36.205 20:29:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:36.468 20:29:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 318652 ']' 00:37:36.468 20:29:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:36.468 20:29:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:36.468 20:29:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:36.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:36.468 20:29:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:36.468 20:29:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:36.468 [2024-05-15 20:29:28.705277] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
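Editor's note: the xtrace in the line above explains the refused connects. bdevperf.sh line 35 reports the running nvmf target app ("${NVMF_APP[@]}") as Killed, then tgt_init/nvmfappstart relaunch nvmf_tgt (nvmfpid=318652) inside the cvl_0_0_ns_spdk namespace and wait for it to listen on /var/tmp/spdk.sock. A rough sketch of that restart pattern, using only the helper names, paths and flags visible in the trace; the real logic lives in SPDK's test/nvmf common.sh helpers, so treat this as illustrative, not the actual implementation:

# old target was SIGKILLed earlier in the script; host reconnects now hit ECONNREFUSED
kill -9 "$old_nvmfpid"    # variable name is hypothetical, for illustration only
# relaunch the target in the test network namespace with the same core mask
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!                # 318652 in this run
waitforlisten "$nvmfpid"  # blocks until the target accepts RPCs on /var/tmp/spdk.sock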
00:37:36.468 [2024-05-15 20:29:28.713984] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.468 [2024-05-15 20:29:28.714488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.468 [2024-05-15 20:29:28.714872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.468 [2024-05-15 20:29:28.714883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.468 [2024-05-15 20:29:28.714890] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.468 [2024-05-15 20:29:28.715113] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.468 [2024-05-15 20:29:28.715340] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.468 [2024-05-15 20:29:28.715348] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.468 [2024-05-15 20:29:28.715355] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.468 [2024-05-15 20:29:28.718955] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:36.468 [2024-05-15 20:29:28.727868] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.468 [2024-05-15 20:29:28.728370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.468 [2024-05-15 20:29:28.728723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.468 [2024-05-15 20:29:28.728734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.468 [2024-05-15 20:29:28.728746] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.468 [2024-05-15 20:29:28.728968] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.468 [2024-05-15 20:29:28.729189] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.468 [2024-05-15 20:29:28.729197] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.468 [2024-05-15 20:29:28.729203] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.468 [2024-05-15 20:29:28.732807] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:36.468 [2024-05-15 20:29:28.741727] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.468 [2024-05-15 20:29:28.742416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.468 [2024-05-15 20:29:28.742759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.468 [2024-05-15 20:29:28.742772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.468 [2024-05-15 20:29:28.742781] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.468 [2024-05-15 20:29:28.743022] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.468 [2024-05-15 20:29:28.743247] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.468 [2024-05-15 20:29:28.743255] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.469 [2024-05-15 20:29:28.743263] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.469 [2024-05-15 20:29:28.746866] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:36.469 [2024-05-15 20:29:28.749300] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:37:36.469 [2024-05-15 20:29:28.749351] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:36.469 [2024-05-15 20:29:28.755767] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.469 [2024-05-15 20:29:28.756355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.469 [2024-05-15 20:29:28.756760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.469 [2024-05-15 20:29:28.756771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.469 [2024-05-15 20:29:28.756779] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.469 [2024-05-15 20:29:28.757001] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.469 [2024-05-15 20:29:28.757223] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.469 [2024-05-15 20:29:28.757231] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.469 [2024-05-15 20:29:28.757238] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.469 [2024-05-15 20:29:28.760833] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
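Editor's note: the -m 0xE passed to nvmfappstart shows up above as the EAL core mask -c 0xE; 0xE is binary 1110, i.e. CPU cores 1-3. A quick illustrative way to decode such a mask:

mask=0xE                      # core mask from the nvmf_tgt command line above
for cpu in $(seq 0 7); do     # 8 bits is plenty for 0xE
    (( (mask >> cpu) & 1 )) && echo "core $cpu is in the mask"
done
# prints cores 1, 2 and 3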
00:37:36.469 [2024-05-15 20:29:28.769724] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.469 [2024-05-15 20:29:28.770532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.469 [2024-05-15 20:29:28.770896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.469 [2024-05-15 20:29:28.770910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.469 [2024-05-15 20:29:28.770920] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.469 [2024-05-15 20:29:28.771161] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.469 [2024-05-15 20:29:28.771392] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.469 [2024-05-15 20:29:28.771400] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.469 [2024-05-15 20:29:28.771408] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.469 [2024-05-15 20:29:28.775006] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:36.469 [2024-05-15 20:29:28.783697] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.469 [2024-05-15 20:29:28.784186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.469 EAL: No free 2048 kB hugepages reported on node 1 00:37:36.469 [2024-05-15 20:29:28.784650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.469 [2024-05-15 20:29:28.784660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.469 [2024-05-15 20:29:28.784668] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.469 [2024-05-15 20:29:28.784890] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.469 [2024-05-15 20:29:28.785111] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.469 [2024-05-15 20:29:28.785119] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.469 [2024-05-15 20:29:28.785126] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.469 [2024-05-15 20:29:28.788727] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:36.469 [2024-05-15 20:29:28.797621] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.469 [2024-05-15 20:29:28.798211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.469 [2024-05-15 20:29:28.798544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.469 [2024-05-15 20:29:28.798554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.469 [2024-05-15 20:29:28.798562] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.469 [2024-05-15 20:29:28.798783] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.469 [2024-05-15 20:29:28.799004] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.469 [2024-05-15 20:29:28.799012] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.469 [2024-05-15 20:29:28.799018] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.469 [2024-05-15 20:29:28.802612] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:36.469 [2024-05-15 20:29:28.811508] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.469 [2024-05-15 20:29:28.812148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.469 [2024-05-15 20:29:28.812570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.469 [2024-05-15 20:29:28.812588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.469 [2024-05-15 20:29:28.812596] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.469 [2024-05-15 20:29:28.812818] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.469 [2024-05-15 20:29:28.813039] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.469 [2024-05-15 20:29:28.813046] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.469 [2024-05-15 20:29:28.813054] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.469 [2024-05-15 20:29:28.816650] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:36.469 [2024-05-15 20:29:28.821155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:36.469 [2024-05-15 20:29:28.825556] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.469 [2024-05-15 20:29:28.826031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.469 [2024-05-15 20:29:28.826250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.469 [2024-05-15 20:29:28.826259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.469 [2024-05-15 20:29:28.826266] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.469 [2024-05-15 20:29:28.826496] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.469 [2024-05-15 20:29:28.826718] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.469 [2024-05-15 20:29:28.826725] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.469 [2024-05-15 20:29:28.826732] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.469 [2024-05-15 20:29:28.830323] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:36.469 [2024-05-15 20:29:28.839444] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.469 [2024-05-15 20:29:28.840009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.469 [2024-05-15 20:29:28.840384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.469 [2024-05-15 20:29:28.840395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.469 [2024-05-15 20:29:28.840402] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.469 [2024-05-15 20:29:28.840624] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.469 [2024-05-15 20:29:28.840845] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.469 [2024-05-15 20:29:28.840853] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.469 [2024-05-15 20:29:28.840860] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.469 [2024-05-15 20:29:28.844452] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:36.469 [2024-05-15 20:29:28.853355] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.469 [2024-05-15 20:29:28.853995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.469 [2024-05-15 20:29:28.854278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.469 [2024-05-15 20:29:28.854287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.469 [2024-05-15 20:29:28.854299] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.469 [2024-05-15 20:29:28.854526] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.469 [2024-05-15 20:29:28.854749] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.469 [2024-05-15 20:29:28.854757] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.469 [2024-05-15 20:29:28.854764] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.469 [2024-05-15 20:29:28.858358] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:36.469 [2024-05-15 20:29:28.867252] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.469 [2024-05-15 20:29:28.867901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.469 [2024-05-15 20:29:28.868292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.469 [2024-05-15 20:29:28.868302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.469 [2024-05-15 20:29:28.868310] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.469 [2024-05-15 20:29:28.868536] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.469 [2024-05-15 20:29:28.868757] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.469 [2024-05-15 20:29:28.868765] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.469 [2024-05-15 20:29:28.868772] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.470 [2024-05-15 20:29:28.872362] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:36.470 [2024-05-15 20:29:28.881255] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.470 [2024-05-15 20:29:28.881854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.470 [2024-05-15 20:29:28.882258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.470 [2024-05-15 20:29:28.882267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.470 [2024-05-15 20:29:28.882274] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.470 [2024-05-15 20:29:28.882500] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.470 [2024-05-15 20:29:28.882721] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.470 [2024-05-15 20:29:28.882729] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.470 [2024-05-15 20:29:28.882736] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.470 [2024-05-15 20:29:28.885158] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:36.470 [2024-05-15 20:29:28.885187] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:36.470 [2024-05-15 20:29:28.885194] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:36.470 [2024-05-15 20:29:28.885200] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:36.470 [2024-05-15 20:29:28.885206] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:36.470 [2024-05-15 20:29:28.885308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:37:36.470 [2024-05-15 20:29:28.885466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:37:36.470 [2024-05-15 20:29:28.885598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:36.470 [2024-05-15 20:29:28.886445] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:36.470 [2024-05-15 20:29:28.895140] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.470 [2024-05-15 20:29:28.895865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.470 [2024-05-15 20:29:28.896163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.470 [2024-05-15 20:29:28.896176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.470 [2024-05-15 20:29:28.896186] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.470 [2024-05-15 20:29:28.896442] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.470 [2024-05-15 20:29:28.896668] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.470 [2024-05-15 20:29:28.896676] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.470 [2024-05-15 20:29:28.896684] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.470 [2024-05-15 20:29:28.900281] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:36.470 [2024-05-15 20:29:28.909191] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.470 [2024-05-15 20:29:28.909846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.470 [2024-05-15 20:29:28.910247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.470 [2024-05-15 20:29:28.910258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.470 [2024-05-15 20:29:28.910266] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.470 [2024-05-15 20:29:28.910494] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.470 [2024-05-15 20:29:28.910716] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.470 [2024-05-15 20:29:28.910724] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.470 [2024-05-15 20:29:28.910731] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.470 [2024-05-15 20:29:28.914327] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:36.470 [2024-05-15 20:29:28.923225] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.470 [2024-05-15 20:29:28.923777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.470 [2024-05-15 20:29:28.923994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.470 [2024-05-15 20:29:28.924004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.470 [2024-05-15 20:29:28.924012] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.470 [2024-05-15 20:29:28.924234] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.470 [2024-05-15 20:29:28.924460] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.470 [2024-05-15 20:29:28.924468] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.470 [2024-05-15 20:29:28.924475] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.470 [2024-05-15 20:29:28.928070] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:36.470 [2024-05-15 20:29:28.937179] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.470 [2024-05-15 20:29:28.937877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.470 [2024-05-15 20:29:28.938275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.470 [2024-05-15 20:29:28.938288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.470 [2024-05-15 20:29:28.938298] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.470 [2024-05-15 20:29:28.938552] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.470 [2024-05-15 20:29:28.938778] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.470 [2024-05-15 20:29:28.938787] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.470 [2024-05-15 20:29:28.938795] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.470 [2024-05-15 20:29:28.942410] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:36.470 [2024-05-15 20:29:28.951100] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.470 [2024-05-15 20:29:28.951840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.470 [2024-05-15 20:29:28.952236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.470 [2024-05-15 20:29:28.952249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.470 [2024-05-15 20:29:28.952258] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.470 [2024-05-15 20:29:28.952508] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.470 [2024-05-15 20:29:28.952734] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.470 [2024-05-15 20:29:28.952743] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.470 [2024-05-15 20:29:28.952750] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.470 [2024-05-15 20:29:28.956347] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:36.470 [2024-05-15 20:29:28.965034] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.470 [2024-05-15 20:29:28.965568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.470 [2024-05-15 20:29:28.965971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.470 [2024-05-15 20:29:28.965984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.470 [2024-05-15 20:29:28.965993] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.470 [2024-05-15 20:29:28.966234] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.470 [2024-05-15 20:29:28.966467] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.470 [2024-05-15 20:29:28.966475] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.470 [2024-05-15 20:29:28.966483] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.732 [2024-05-15 20:29:28.970080] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:36.732 20:29:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:36.732 20:29:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:37:36.732 20:29:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:36.732 20:29:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:36.732 20:29:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:36.732 [2024-05-15 20:29:28.978991] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.732 [2024-05-15 20:29:28.979608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.732 [2024-05-15 20:29:28.979981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.732 [2024-05-15 20:29:28.979992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.732 [2024-05-15 20:29:28.980001] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.732 [2024-05-15 20:29:28.980224] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.732 [2024-05-15 20:29:28.980450] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.732 [2024-05-15 20:29:28.980458] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.732 [2024-05-15 20:29:28.980465] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.732 [2024-05-15 20:29:28.984055] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:36.732 [2024-05-15 20:29:28.992953] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.732 [2024-05-15 20:29:28.993627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.732 [2024-05-15 20:29:28.994019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.732 [2024-05-15 20:29:28.994032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.732 [2024-05-15 20:29:28.994041] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.732 [2024-05-15 20:29:28.994283] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.732 [2024-05-15 20:29:28.994514] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.732 [2024-05-15 20:29:28.994523] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.732 [2024-05-15 20:29:28.994530] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.732 [2024-05-15 20:29:28.998127] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:36.732 [2024-05-15 20:29:29.006813] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.732 [2024-05-15 20:29:29.007573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.732 [2024-05-15 20:29:29.008052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.732 [2024-05-15 20:29:29.008065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.732 [2024-05-15 20:29:29.008075] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.732 [2024-05-15 20:29:29.008325] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.732 [2024-05-15 20:29:29.008551] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.732 [2024-05-15 20:29:29.008559] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.732 [2024-05-15 20:29:29.008571] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.732 20:29:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:36.732 20:29:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:36.732 [2024-05-15 20:29:29.012169] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:36.732 20:29:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.732 20:29:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:36.732 [2024-05-15 20:29:29.019348] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:36.732 [2024-05-15 20:29:29.020858] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.732 [2024-05-15 20:29:29.021307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.732 [2024-05-15 20:29:29.021777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.732 [2024-05-15 20:29:29.021792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.732 [2024-05-15 20:29:29.021802] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.732 [2024-05-15 20:29:29.022042] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.732 [2024-05-15 20:29:29.022267] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.732 [2024-05-15 20:29:29.022276] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.732 [2024-05-15 20:29:29.022283] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:37:36.732 20:29:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.732 20:29:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:36.732 20:29:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.732 20:29:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:36.732 [2024-05-15 20:29:29.025886] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:36.732 [2024-05-15 20:29:29.034785] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.732 [2024-05-15 20:29:29.035548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.732 [2024-05-15 20:29:29.035938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.732 [2024-05-15 20:29:29.035951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.732 [2024-05-15 20:29:29.035960] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.732 [2024-05-15 20:29:29.036201] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.732 [2024-05-15 20:29:29.036432] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.732 [2024-05-15 20:29:29.036440] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.732 [2024-05-15 20:29:29.036448] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.732 [2024-05-15 20:29:29.040057] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:36.732 [2024-05-15 20:29:29.048746] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.732 [2024-05-15 20:29:29.049319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.732 [2024-05-15 20:29:29.049741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.732 [2024-05-15 20:29:29.049758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.732 [2024-05-15 20:29:29.049768] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.732 [2024-05-15 20:29:29.050009] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.732 [2024-05-15 20:29:29.050234] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.732 [2024-05-15 20:29:29.050242] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.733 [2024-05-15 20:29:29.050249] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.733 [2024-05-15 20:29:29.053846] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:36.733 Malloc0 00:37:36.733 20:29:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.733 20:29:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:36.733 20:29:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.733 20:29:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:36.733 [2024-05-15 20:29:29.062750] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.733 [2024-05-15 20:29:29.063364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.733 [2024-05-15 20:29:29.063765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.733 [2024-05-15 20:29:29.063775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.733 [2024-05-15 20:29:29.063783] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.733 [2024-05-15 20:29:29.064005] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.733 [2024-05-15 20:29:29.064227] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.733 [2024-05-15 20:29:29.064234] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.733 [2024-05-15 20:29:29.064241] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.733 [2024-05-15 20:29:29.067838] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:36.733 20:29:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.733 20:29:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:36.733 20:29:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.733 20:29:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:36.733 [2024-05-15 20:29:29.076734] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.733 [2024-05-15 20:29:29.077416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.733 [2024-05-15 20:29:29.077686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:36.733 [2024-05-15 20:29:29.077700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a1080 with addr=10.0.0.2, port=4420 00:37:36.733 [2024-05-15 20:29:29.077710] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a1080 is same with the state(5) to be set 00:37:36.733 [2024-05-15 20:29:29.077950] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a1080 (9): Bad file descriptor 00:37:36.733 [2024-05-15 20:29:29.078175] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:36.733 [2024-05-15 20:29:29.078184] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:36.733 [2024-05-15 20:29:29.078195] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:36.733 [2024-05-15 20:29:29.081797] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:36.733 20:29:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.733 20:29:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:36.733 20:29:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.733 20:29:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:36.733 [2024-05-15 20:29:29.090471] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:37:36.733 [2024-05-15 20:29:29.090688] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:36.733 [2024-05-15 20:29:29.090696] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:36.733 20:29:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.733 20:29:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 317603 00:37:36.993 [2024-05-15 20:29:29.258172] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:37:46.989
00:37:46.989 Latency(us)
00:37:46.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:46.989 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:37:46.989 Verification LBA range: start 0x0 length 0x4000
00:37:46.989 Nvme1n1 : 15.00 6901.33 26.96 8299.74 0.00 8393.39 1085.44 14964.05
00:37:46.989 ===================================================================================================================
00:37:46.989 Total : 6901.33 26.96 8299.74 0.00 8393.39 1085.44 14964.05
00:37:46.989 20:29:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:37:46.989 20:29:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:37:46.989 20:29:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:46.989 20:29:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:37:46.989 20:29:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:46.989 20:29:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:37:46.989 20:29:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:37:46.989 20:29:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:37:46.989 20:29:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:37:46.989 20:29:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:37:46.989 20:29:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:37:46.989 20:29:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:37:46.989 20:29:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:37:46.989 rmmod nvme_tcp
00:37:46.989 rmmod nvme_fabrics
00:37:46.989 rmmod nvme_keyring
00:37:46.989 20:29:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:37:46.989 20:29:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:37:46.989 20:29:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:37:46.989 20:29:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 318652 ']'
00:37:46.989 20:29:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 318652
00:37:46.989 20:29:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 318652 ']'
00:37:46.989 20:29:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 318652
00:37:46.989 20:29:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname
00:37:46.989 20:29:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:37:46.989 20:29:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 318652
00:37:46.989 20:29:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:37:46.989 20:29:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:37:46.989 20:29:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 318652'
00:37:46.989 killing process with pid 318652
00:37:46.989 20:29:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 318652
00:37:46.989 [2024-05-15 20:29:38.377643] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:37:46.989 20:29:38 nvmf_tcp.nvmf_bdevperf --
common/autotest_common.sh@970 -- # wait 318652 00:37:46.989 20:29:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:46.989 20:29:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:46.989 20:29:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:46.989 20:29:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:46.989 20:29:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:46.989 20:29:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:46.989 20:29:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:46.989 20:29:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:48.373 20:29:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:48.373 00:37:48.373 real 0m28.038s 00:37:48.373 user 1m1.328s 00:37:48.373 sys 0m7.594s 00:37:48.373 20:29:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:48.373 20:29:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:48.373 ************************************ 00:37:48.373 END TEST nvmf_bdevperf 00:37:48.373 ************************************ 00:37:48.373 20:29:40 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:48.373 20:29:40 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:37:48.373 20:29:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:48.373 20:29:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:48.373 ************************************ 00:37:48.373 START TEST nvmf_target_disconnect 00:37:48.373 ************************************ 00:37:48.373 20:29:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:48.373 * Looking for test storage... 
00:37:48.373 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:48.373 20:29:40 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:48.373 20:29:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:37:48.373 20:29:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:48.373 20:29:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:48.373 20:29:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:37:48.374 20:29:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:56.512 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:56.512 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:56.512 20:29:48 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:56.512 Found net devices under 0000:31:00.0: cvl_0_0 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:56.512 Found net devices under 0000:31:00.1: cvl_0_1 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:56.512 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:56.513 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:56.513 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:56.513 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:56.513 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:56.513 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:56.513 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:56.513 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:56.513 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:56.513 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:37:56.513 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:56.513 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:56.513 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:56.513 20:29:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:56.774 20:29:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:56.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:56.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.530 ms 00:37:56.774 00:37:56.774 --- 10.0.0.2 ping statistics --- 00:37:56.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:56.774 rtt min/avg/max/mdev = 0.530/0.530/0.530/0.000 ms 00:37:56.774 20:29:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:56.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:56.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.379 ms 00:37:56.774 00:37:56.774 --- 10.0.0.1 ping statistics --- 00:37:56.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:56.774 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:37:56.774 20:29:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:56.774 20:29:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:37:56.774 20:29:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:56.774 20:29:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:56.774 20:29:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:56.774 20:29:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:56.774 20:29:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:56.774 20:29:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:56.774 20:29:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:56.774 20:29:49 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:37:56.774 20:29:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:56.774 20:29:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:56.774 20:29:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:56.774 ************************************ 00:37:56.774 START TEST nvmf_target_disconnect_tc1 00:37:56.774 ************************************ 00:37:56.774 20:29:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:37:56.774 20:29:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:56.774 20:29:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:37:56.774 
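For readers skimming the trace: the nvmf_tcp_init block traced earlier boils down to the following plain commands (a condensed sketch; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are the ones printed in this run):

  # Target-side port goes into its own network namespace, initiator-side port
  # stays in the default namespace; both directions are then verified with ping.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # default ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> default ns

Splitting the two NIC ports across namespaces lets the single CI host act as both initiator and target over a real NIC-to-NIC TCP path on port 4420 rather than loopback.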
20:29:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:56.774 20:29:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:56.774 20:29:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:56.774 20:29:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:56.774 20:29:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:56.774 20:29:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:56.774 20:29:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:56.774 20:29:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:56.774 20:29:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:37:56.774 20:29:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:56.774 EAL: No free 2048 kB hugepages reported on node 1 00:37:56.774 [2024-05-15 20:29:49.257069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:56.774 [2024-05-15 20:29:49.257533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:56.774 [2024-05-15 20:29:49.257549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x829520 with addr=10.0.0.2, port=4420 00:37:56.774 [2024-05-15 20:29:49.257579] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:37:56.774 [2024-05-15 20:29:49.257595] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:56.774 [2024-05-15 20:29:49.257603] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:37:56.774 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:37:56.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:37:56.774 Initializing NVMe Controllers 00:37:56.774 20:29:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:37:56.774 20:29:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:56.774 20:29:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:56.774 20:29:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 
-- # (( !es == 0 )) 00:37:56.774 00:37:56.774 real 0m0.135s 00:37:56.774 user 0m0.056s 00:37:56.774 sys 0m0.078s 00:37:56.774 20:29:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:56.774 20:29:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:37:56.774 ************************************ 00:37:56.774 END TEST nvmf_target_disconnect_tc1 00:37:56.774 ************************************ 00:37:57.035 20:29:49 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:37:57.035 20:29:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:57.035 20:29:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:57.035 20:29:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:57.035 ************************************ 00:37:57.035 START TEST nvmf_target_disconnect_tc2 00:37:57.035 ************************************ 00:37:57.035 20:29:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:37:57.035 20:29:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:37:57.035 20:29:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:57.035 20:29:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:57.035 20:29:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:37:57.035 20:29:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:57.035 20:29:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=325329 00:37:57.035 20:29:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 325329 00:37:57.035 20:29:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:57.035 20:29:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 325329 ']' 00:37:57.035 20:29:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:57.035 20:29:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:57.035 20:29:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:57.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:37:57.035 20:29:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:57.035 20:29:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:57.035 [2024-05-15 20:29:49.420104] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:37:57.035 [2024-05-15 20:29:49.420160] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:57.035 EAL: No free 2048 kB hugepages reported on node 1 00:37:57.035 [2024-05-15 20:29:49.512559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:57.295 [2024-05-15 20:29:49.607663] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:57.295 [2024-05-15 20:29:49.607721] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:57.295 [2024-05-15 20:29:49.607729] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:57.295 [2024-05-15 20:29:49.607740] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:57.295 [2024-05-15 20:29:49.607746] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:57.295 [2024-05-15 20:29:49.607907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:37:57.295 [2024-05-15 20:29:49.608063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:37:57.295 [2024-05-15 20:29:49.608224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:37:57.295 [2024-05-15 20:29:49.608224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:37:57.866 20:29:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:57.866 20:29:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:37:57.866 20:29:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:57.866 20:29:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:57.866 20:29:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:57.866 20:29:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:57.866 20:29:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:57.866 20:29:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:57.866 20:29:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:58.126 Malloc0 00:37:58.126 20:29:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:58.126 20:29:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:37:58.126 20:29:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 
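The target bring-up and configuration that the rpc_cmd calls around this point perform can be reproduced roughly as below (a sketch with paths shortened to the SPDK repo root; rpc_cmd in the harness is a thin wrapper around scripts/rpc.py, and the RPC names and arguments are the ones visible in the trace):

  # Start the target inside the namespace; the harness waits for the RPC socket
  # at /var/tmp/spdk.sock before issuing any configuration RPCs.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &

  # Configure a malloc-backed namespace and a TCP listener on 10.0.0.2:4420.
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Because the RPC channel is a UNIX domain socket, rpc.py can be run from the default namespace even though the target's TCP listener lives inside cvl_0_0_ns_spdk.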
-- common/autotest_common.sh@559 -- # xtrace_disable 00:37:58.126 20:29:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:58.126 [2024-05-15 20:29:50.394905] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:58.126 20:29:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:58.126 20:29:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:58.126 20:29:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:58.126 20:29:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:58.126 20:29:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:58.126 20:29:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:58.126 20:29:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:58.126 20:29:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:58.126 20:29:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:58.126 20:29:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:58.126 20:29:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:58.126 20:29:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:58.126 [2024-05-15 20:29:50.434972] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:37:58.126 [2024-05-15 20:29:50.435303] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:58.126 20:29:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:58.126 20:29:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:58.126 20:29:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:58.126 20:29:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:58.126 20:29:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:58.126 20:29:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=325386 00:37:58.126 20:29:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:37:58.126 20:29:50 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:58.126 EAL: No free 2048 kB hugepages reported on node 1 00:38:00.041 20:29:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 325329 00:38:00.041 20:29:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:38:00.041 Read completed with error (sct=0, sc=8) 00:38:00.041 starting I/O failed 00:38:00.041 Read completed with error (sct=0, sc=8) 00:38:00.041 starting I/O failed 00:38:00.041 Read completed with error (sct=0, sc=8) 00:38:00.041 starting I/O failed 00:38:00.041 Read completed with error (sct=0, sc=8) 00:38:00.041 starting I/O failed 00:38:00.041 Read completed with error (sct=0, sc=8) 00:38:00.041 starting I/O failed 00:38:00.041 Read completed with error (sct=0, sc=8) 00:38:00.041 starting I/O failed 00:38:00.041 Read completed with error (sct=0, sc=8) 00:38:00.041 starting I/O failed 00:38:00.041 Read completed with error (sct=0, sc=8) 00:38:00.041 starting I/O failed 00:38:00.041 Read completed with error (sct=0, sc=8) 00:38:00.041 starting I/O failed 00:38:00.042 Read completed with error (sct=0, sc=8) 00:38:00.042 starting I/O failed 00:38:00.042 Read completed with error (sct=0, sc=8) 00:38:00.042 starting I/O failed 00:38:00.042 Read completed with error (sct=0, sc=8) 00:38:00.042 starting I/O failed 00:38:00.042 Read completed with error (sct=0, sc=8) 00:38:00.042 starting I/O failed 00:38:00.042 Read completed with error (sct=0, sc=8) 00:38:00.042 starting I/O failed 00:38:00.042 Read completed with error (sct=0, sc=8) 00:38:00.042 starting I/O failed 00:38:00.042 Write completed with error (sct=0, sc=8) 00:38:00.042 starting I/O failed 00:38:00.042 Write completed with error (sct=0, sc=8) 00:38:00.042 starting I/O failed 00:38:00.042 Read completed with error (sct=0, sc=8) 00:38:00.042 starting I/O failed 00:38:00.042 Read completed with error (sct=0, sc=8) 00:38:00.042 starting I/O failed 00:38:00.042 Read completed with error (sct=0, sc=8) 00:38:00.042 starting I/O failed 00:38:00.042 Write completed with error (sct=0, sc=8) 00:38:00.042 starting I/O failed 00:38:00.042 Read completed with error (sct=0, sc=8) 00:38:00.042 starting I/O failed 00:38:00.042 Read completed with error (sct=0, sc=8) 00:38:00.042 starting I/O failed 00:38:00.042 Write completed with error (sct=0, sc=8) 00:38:00.042 starting I/O failed 00:38:00.042 Read completed with error (sct=0, sc=8) 00:38:00.042 starting I/O failed 00:38:00.042 Read completed with error (sct=0, sc=8) 00:38:00.042 starting I/O failed 00:38:00.042 Write completed with error (sct=0, sc=8) 00:38:00.042 starting I/O failed 00:38:00.042 Write completed with error (sct=0, sc=8) 00:38:00.042 starting I/O failed 00:38:00.042 Read completed with error (sct=0, sc=8) 00:38:00.042 starting I/O failed 00:38:00.042 Read completed with error (sct=0, sc=8) 00:38:00.042 starting I/O failed 00:38:00.042 Write completed with error (sct=0, sc=8) 00:38:00.042 starting I/O failed 00:38:00.042 Write completed with error (sct=0, sc=8) 00:38:00.042 starting I/O failed 00:38:00.042 [2024-05-15 20:29:52.468925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:00.042 [2024-05-15 20:29:52.469573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 
[2024-05-15 20:29:52.469977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.469993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.042 qpair failed and we were unable to recover it. 00:38:00.042 [2024-05-15 20:29:52.470543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.470951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.470965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.042 qpair failed and we were unable to recover it. 00:38:00.042 [2024-05-15 20:29:52.471320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.471691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.471732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.042 qpair failed and we were unable to recover it. 00:38:00.042 [2024-05-15 20:29:52.472084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.472344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.472366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.042 qpair failed and we were unable to recover it. 00:38:00.042 [2024-05-15 20:29:52.472704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.473076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.473086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.042 qpair failed and we were unable to recover it. 00:38:00.042 [2024-05-15 20:29:52.473458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.473865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.473875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.042 qpair failed and we were unable to recover it. 00:38:00.042 [2024-05-15 20:29:52.474169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.474552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.474562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.042 qpair failed and we were unable to recover it. 00:38:00.042 [2024-05-15 20:29:52.474825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.475199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.475209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.042 qpair failed and we were unable to recover it. 
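The burst of "Read/Write completed with error (sct=0, sc=8)" completions above and the long run of connect()/qpair failures around it are the expected symptom for this test case: the reconnect example keeps retrying 10.0.0.2:4420 while the target it was talking to has been killed. Condensed from the trace (paths shortened; nvmfpid is the nvmf_tgt pid printed earlier, 325329 in this run), the tc2 sequence is roughly:

  ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  reconnectpid=$!
  sleep 2
  kill -9 "$nvmfpid"   # take the target down mid-I/O; in-flight commands fail
  sleep 2              # reconnect attempts during this window all hit errno 111

The script presumably brings a target back up afterwards so the reconnect workload can recover; that part falls outside this excerpt.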
00:38:00.042 [2024-05-15 20:29:52.475597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.475912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.475922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.042 qpair failed and we were unable to recover it. 00:38:00.042 [2024-05-15 20:29:52.476286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.476617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.476627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.042 qpair failed and we were unable to recover it. 00:38:00.042 [2024-05-15 20:29:52.476952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.477269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.477278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.042 qpair failed and we were unable to recover it. 00:38:00.042 [2024-05-15 20:29:52.477713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.478001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.478011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.042 qpair failed and we were unable to recover it. 00:38:00.042 [2024-05-15 20:29:52.478372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.478763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.478774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.042 qpair failed and we were unable to recover it. 00:38:00.042 [2024-05-15 20:29:52.479180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.479563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.479573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.042 qpair failed and we were unable to recover it. 00:38:00.042 [2024-05-15 20:29:52.479934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.480343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.480352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.042 qpair failed and we were unable to recover it. 
00:38:00.042 [2024-05-15 20:29:52.480751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.481114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.481123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.042 qpair failed and we were unable to recover it. 00:38:00.042 [2024-05-15 20:29:52.481510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.481877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.481886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.042 qpair failed and we were unable to recover it. 00:38:00.042 [2024-05-15 20:29:52.482148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.482447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.482457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.042 qpair failed and we were unable to recover it. 00:38:00.042 [2024-05-15 20:29:52.482829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.483208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.483218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.042 qpair failed and we were unable to recover it. 00:38:00.042 [2024-05-15 20:29:52.483623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.484022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.484033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.042 qpair failed and we were unable to recover it. 00:38:00.042 [2024-05-15 20:29:52.484429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.484749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.484758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.042 qpair failed and we were unable to recover it. 00:38:00.042 [2024-05-15 20:29:52.485116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.485492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.485502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.042 qpair failed and we were unable to recover it. 
00:38:00.042 [2024-05-15 20:29:52.485905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.486262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.486271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.042 qpair failed and we were unable to recover it. 00:38:00.042 [2024-05-15 20:29:52.486659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.042 [2024-05-15 20:29:52.486985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.486995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.043 qpair failed and we were unable to recover it. 00:38:00.043 [2024-05-15 20:29:52.487328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.487605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.487615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.043 qpair failed and we were unable to recover it. 00:38:00.043 [2024-05-15 20:29:52.487992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.488360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.488370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.043 qpair failed and we were unable to recover it. 00:38:00.043 [2024-05-15 20:29:52.488756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.489069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.489078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.043 qpair failed and we were unable to recover it. 00:38:00.043 [2024-05-15 20:29:52.489454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.489856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.489865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.043 qpair failed and we were unable to recover it. 00:38:00.043 [2024-05-15 20:29:52.490190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.490554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.490564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.043 qpair failed and we were unable to recover it. 
00:38:00.043 [2024-05-15 20:29:52.490922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.491291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.491300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.043 qpair failed and we were unable to recover it. 00:38:00.043 [2024-05-15 20:29:52.491690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.492095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.492104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.043 qpair failed and we were unable to recover it. 00:38:00.043 [2024-05-15 20:29:52.492555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.492972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.492998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.043 qpair failed and we were unable to recover it. 00:38:00.043 [2024-05-15 20:29:52.493390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.493711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.493723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.043 qpair failed and we were unable to recover it. 00:38:00.043 [2024-05-15 20:29:52.494101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.494487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.494499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.043 qpair failed and we were unable to recover it. 00:38:00.043 [2024-05-15 20:29:52.494887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.495209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.495220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.043 qpair failed and we were unable to recover it. 00:38:00.043 [2024-05-15 20:29:52.495602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.495976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.495988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.043 qpair failed and we were unable to recover it. 
00:38:00.043 [2024-05-15 20:29:52.496322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.496609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.496621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.043 qpair failed and we were unable to recover it. 00:38:00.043 [2024-05-15 20:29:52.497041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.497407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.497419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.043 qpair failed and we were unable to recover it. 00:38:00.043 [2024-05-15 20:29:52.497811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.498060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.498071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.043 qpair failed and we were unable to recover it. 00:38:00.043 [2024-05-15 20:29:52.498467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.498800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.498812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.043 qpair failed and we were unable to recover it. 00:38:00.043 [2024-05-15 20:29:52.499203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.499548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.499560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.043 qpair failed and we were unable to recover it. 00:38:00.043 [2024-05-15 20:29:52.499953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.500207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.500218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.043 qpair failed and we were unable to recover it. 00:38:00.043 [2024-05-15 20:29:52.500596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.500955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.500966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.043 qpair failed and we were unable to recover it. 
00:38:00.043 [2024-05-15 20:29:52.501218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.501634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.501646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.043 qpair failed and we were unable to recover it. 00:38:00.043 [2024-05-15 20:29:52.502011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.502373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.502384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.043 qpair failed and we were unable to recover it. 00:38:00.043 [2024-05-15 20:29:52.502759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.503171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.503184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.043 qpair failed and we were unable to recover it. 00:38:00.043 [2024-05-15 20:29:52.503553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.503907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.503918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.043 qpair failed and we were unable to recover it. 00:38:00.043 [2024-05-15 20:29:52.504268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.504667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.504679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.043 qpair failed and we were unable to recover it. 00:38:00.043 [2024-05-15 20:29:52.504996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.505390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.505406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.043 qpair failed and we were unable to recover it. 00:38:00.043 [2024-05-15 20:29:52.505774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.506144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.506159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.043 qpair failed and we were unable to recover it. 
00:38:00.043 [2024-05-15 20:29:52.506568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.506904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.506919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.043 qpair failed and we were unable to recover it. 00:38:00.043 [2024-05-15 20:29:52.507330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.507704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.507719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.043 qpair failed and we were unable to recover it. 00:38:00.043 [2024-05-15 20:29:52.508123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.508494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.043 [2024-05-15 20:29:52.508510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.043 qpair failed and we were unable to recover it. 00:38:00.043 [2024-05-15 20:29:52.508891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.044 [2024-05-15 20:29:52.509280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.044 [2024-05-15 20:29:52.509295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.044 qpair failed and we were unable to recover it. 00:38:00.044 [2024-05-15 20:29:52.509595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.044 [2024-05-15 20:29:52.510000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.044 [2024-05-15 20:29:52.510016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.044 qpair failed and we were unable to recover it. 00:38:00.044 [2024-05-15 20:29:52.510370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.044 [2024-05-15 20:29:52.510765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.044 [2024-05-15 20:29:52.510780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.044 qpair failed and we were unable to recover it. 00:38:00.044 [2024-05-15 20:29:52.511163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.044 [2024-05-15 20:29:52.511495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.044 [2024-05-15 20:29:52.511511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.044 qpair failed and we were unable to recover it. 
00:38:00.044 [2024-05-15 20:29:52.511909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.044 [2024-05-15 20:29:52.512271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.044 [2024-05-15 20:29:52.512286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.044 qpair failed and we were unable to recover it. 00:38:00.044 [2024-05-15 20:29:52.512524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.044 [2024-05-15 20:29:52.512949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.044 [2024-05-15 20:29:52.512965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.044 qpair failed and we were unable to recover it. 00:38:00.044 [2024-05-15 20:29:52.513342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.044 [2024-05-15 20:29:52.513757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.044 [2024-05-15 20:29:52.513772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.044 qpair failed and we were unable to recover it. 00:38:00.044 [2024-05-15 20:29:52.514122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.044 [2024-05-15 20:29:52.514419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.044 [2024-05-15 20:29:52.514435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.044 qpair failed and we were unable to recover it. 00:38:00.044 [2024-05-15 20:29:52.514810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.044 [2024-05-15 20:29:52.515107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.044 [2024-05-15 20:29:52.515121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.044 qpair failed and we were unable to recover it. 00:38:00.044 [2024-05-15 20:29:52.515532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.044 [2024-05-15 20:29:52.515917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.044 [2024-05-15 20:29:52.515932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.044 qpair failed and we were unable to recover it. 00:38:00.044 [2024-05-15 20:29:52.516347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.044 [2024-05-15 20:29:52.516721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.044 [2024-05-15 20:29:52.516740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.044 qpair failed and we were unable to recover it. 
00:38:00.044 [2024-05-15 20:29:52.517169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.044 [2024-05-15 20:29:52.517641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.044 [2024-05-15 20:29:52.517662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.044 qpair failed and we were unable to recover it. 00:38:00.044 [2024-05-15 20:29:52.518017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.044 [2024-05-15 20:29:52.518403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.044 [2024-05-15 20:29:52.518423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.044 qpair failed and we were unable to recover it. 00:38:00.044 [2024-05-15 20:29:52.518832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.044 [2024-05-15 20:29:52.519202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.044 [2024-05-15 20:29:52.519221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.044 qpair failed and we were unable to recover it. 00:38:00.044 [2024-05-15 20:29:52.519672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.044 [2024-05-15 20:29:52.520041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.044 [2024-05-15 20:29:52.520061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.044 qpair failed and we were unable to recover it. 00:38:00.044 [2024-05-15 20:29:52.520458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.044 [2024-05-15 20:29:52.520794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.044 [2024-05-15 20:29:52.520813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.044 qpair failed and we were unable to recover it. 00:38:00.044 [2024-05-15 20:29:52.521247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.044 [2024-05-15 20:29:52.521537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.044 [2024-05-15 20:29:52.521557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.044 qpair failed and we were unable to recover it. 00:38:00.044 [2024-05-15 20:29:52.521865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.044 [2024-05-15 20:29:52.522250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.044 [2024-05-15 20:29:52.522270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.044 qpair failed and we were unable to recover it. 
00:38:00.317 [2024-05-15 20:29:52.642688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.317 [2024-05-15 20:29:52.643004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.317 [2024-05-15 20:29:52.643030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.317 qpair failed and we were unable to recover it. 00:38:00.317 [2024-05-15 20:29:52.643355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.317 [2024-05-15 20:29:52.643783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.317 [2024-05-15 20:29:52.643809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.317 qpair failed and we were unable to recover it. 00:38:00.317 [2024-05-15 20:29:52.644225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.317 [2024-05-15 20:29:52.644630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.317 [2024-05-15 20:29:52.644657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.317 qpair failed and we were unable to recover it. 00:38:00.318 [2024-05-15 20:29:52.645045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.645422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.645450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.318 qpair failed and we were unable to recover it. 00:38:00.318 [2024-05-15 20:29:52.645835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.646242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.646269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.318 qpair failed and we were unable to recover it. 00:38:00.318 [2024-05-15 20:29:52.646666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.647069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.647095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.318 qpair failed and we were unable to recover it. 00:38:00.318 [2024-05-15 20:29:52.647492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.647903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.647930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.318 qpair failed and we were unable to recover it. 
00:38:00.318 [2024-05-15 20:29:52.648339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.648764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.648802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.318 qpair failed and we were unable to recover it. 00:38:00.318 [2024-05-15 20:29:52.649263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.649568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.649596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.318 qpair failed and we were unable to recover it. 00:38:00.318 [2024-05-15 20:29:52.650027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.650437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.650465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.318 qpair failed and we were unable to recover it. 00:38:00.318 [2024-05-15 20:29:52.650893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.651276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.651302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.318 qpair failed and we were unable to recover it. 00:38:00.318 [2024-05-15 20:29:52.651690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.652075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.652102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.318 qpair failed and we were unable to recover it. 00:38:00.318 [2024-05-15 20:29:52.652527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.652944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.652970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.318 qpair failed and we were unable to recover it. 00:38:00.318 [2024-05-15 20:29:52.653378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.653801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.653827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.318 qpair failed and we were unable to recover it. 
00:38:00.318 [2024-05-15 20:29:52.654176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.654590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.654618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.318 qpair failed and we were unable to recover it. 00:38:00.318 [2024-05-15 20:29:52.655044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.655431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.655459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.318 qpair failed and we were unable to recover it. 00:38:00.318 [2024-05-15 20:29:52.655885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.656292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.656327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.318 qpair failed and we were unable to recover it. 00:38:00.318 [2024-05-15 20:29:52.656730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.657128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.657155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.318 qpair failed and we were unable to recover it. 00:38:00.318 [2024-05-15 20:29:52.657569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.657963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.657989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.318 qpair failed and we were unable to recover it. 00:38:00.318 [2024-05-15 20:29:52.658280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.658719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.658746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.318 qpair failed and we were unable to recover it. 00:38:00.318 [2024-05-15 20:29:52.659162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.659453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.659485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.318 qpair failed and we were unable to recover it. 
00:38:00.318 [2024-05-15 20:29:52.659924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.660333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.660361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.318 qpair failed and we were unable to recover it. 00:38:00.318 [2024-05-15 20:29:52.660771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.661172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.661198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.318 qpair failed and we were unable to recover it. 00:38:00.318 [2024-05-15 20:29:52.661670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.662045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.662072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.318 qpair failed and we were unable to recover it. 00:38:00.318 [2024-05-15 20:29:52.662502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.662909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.662936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.318 qpair failed and we were unable to recover it. 00:38:00.318 [2024-05-15 20:29:52.663312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.663723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.663750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.318 qpair failed and we were unable to recover it. 00:38:00.318 [2024-05-15 20:29:52.664175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.664596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.664623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.318 qpair failed and we were unable to recover it. 00:38:00.318 [2024-05-15 20:29:52.665022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.665442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.665469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.318 qpair failed and we were unable to recover it. 
00:38:00.318 [2024-05-15 20:29:52.665883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.666267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.666294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.318 qpair failed and we were unable to recover it. 00:38:00.318 [2024-05-15 20:29:52.666662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.667064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.667091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.318 qpair failed and we were unable to recover it. 00:38:00.318 [2024-05-15 20:29:52.667502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.667884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.667911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.318 qpair failed and we were unable to recover it. 00:38:00.318 [2024-05-15 20:29:52.668219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.668614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.318 [2024-05-15 20:29:52.668642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.318 qpair failed and we were unable to recover it. 00:38:00.318 [2024-05-15 20:29:52.669057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.669453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.669480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.319 qpair failed and we were unable to recover it. 00:38:00.319 [2024-05-15 20:29:52.669861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.670276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.670302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.319 qpair failed and we were unable to recover it. 00:38:00.319 [2024-05-15 20:29:52.670720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.671019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.671046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.319 qpair failed and we were unable to recover it. 
00:38:00.319 [2024-05-15 20:29:52.671435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.671846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.671874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.319 qpair failed and we were unable to recover it. 00:38:00.319 [2024-05-15 20:29:52.672193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.672530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.672557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.319 qpair failed and we were unable to recover it. 00:38:00.319 [2024-05-15 20:29:52.672976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.673362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.673390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.319 qpair failed and we were unable to recover it. 00:38:00.319 [2024-05-15 20:29:52.673811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.674226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.674254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.319 qpair failed and we were unable to recover it. 00:38:00.319 [2024-05-15 20:29:52.674661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.675042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.675068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.319 qpair failed and we were unable to recover it. 00:38:00.319 [2024-05-15 20:29:52.675474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.675835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.675862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.319 qpair failed and we were unable to recover it. 00:38:00.319 [2024-05-15 20:29:52.676058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.676461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.676489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.319 qpair failed and we were unable to recover it. 
00:38:00.319 [2024-05-15 20:29:52.676913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.677310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.677348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.319 qpair failed and we were unable to recover it. 00:38:00.319 [2024-05-15 20:29:52.677753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.678133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.678159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.319 qpair failed and we were unable to recover it. 00:38:00.319 [2024-05-15 20:29:52.678507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.678970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.678997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.319 qpair failed and we were unable to recover it. 00:38:00.319 [2024-05-15 20:29:52.679418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.679782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.679808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.319 qpair failed and we were unable to recover it. 00:38:00.319 [2024-05-15 20:29:52.680222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.680472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.680500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.319 qpair failed and we were unable to recover it. 00:38:00.319 [2024-05-15 20:29:52.680894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.681265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.681292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.319 qpair failed and we were unable to recover it. 00:38:00.319 [2024-05-15 20:29:52.681717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.682099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.682132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.319 qpair failed and we were unable to recover it. 
00:38:00.319 [2024-05-15 20:29:52.682569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.682957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.682984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.319 qpair failed and we were unable to recover it. 00:38:00.319 [2024-05-15 20:29:52.683410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.683728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.683754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.319 qpair failed and we were unable to recover it. 00:38:00.319 [2024-05-15 20:29:52.684172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.684584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.684612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.319 qpair failed and we were unable to recover it. 00:38:00.319 [2024-05-15 20:29:52.684919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.685333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.685360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.319 qpair failed and we were unable to recover it. 00:38:00.319 [2024-05-15 20:29:52.685754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.686140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.686167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.319 qpair failed and we were unable to recover it. 00:38:00.319 [2024-05-15 20:29:52.686470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.686872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.686898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.319 qpair failed and we were unable to recover it. 00:38:00.319 [2024-05-15 20:29:52.687328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.687789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.687816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.319 qpair failed and we were unable to recover it. 
00:38:00.319 [2024-05-15 20:29:52.688233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.688572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.688600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.319 qpair failed and we were unable to recover it. 00:38:00.319 [2024-05-15 20:29:52.688998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.689387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.689414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.319 qpair failed and we were unable to recover it. 00:38:00.319 [2024-05-15 20:29:52.689834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.690220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.690247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.319 qpair failed and we were unable to recover it. 00:38:00.319 [2024-05-15 20:29:52.690680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.691137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.691164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.319 qpair failed and we were unable to recover it. 00:38:00.319 [2024-05-15 20:29:52.691564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.691947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.691973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.319 qpair failed and we were unable to recover it. 00:38:00.319 [2024-05-15 20:29:52.692374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.692756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.692782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.319 qpair failed and we were unable to recover it. 00:38:00.319 [2024-05-15 20:29:52.693202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.319 [2024-05-15 20:29:52.693613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.693641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.320 qpair failed and we were unable to recover it. 
00:38:00.320 [2024-05-15 20:29:52.694038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.694431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.694459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.320 qpair failed and we were unable to recover it. 00:38:00.320 [2024-05-15 20:29:52.694868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.695279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.695306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.320 qpair failed and we were unable to recover it. 00:38:00.320 [2024-05-15 20:29:52.695706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.696094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.696120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.320 qpair failed and we were unable to recover it. 00:38:00.320 [2024-05-15 20:29:52.696525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.697005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.697031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.320 qpair failed and we were unable to recover it. 00:38:00.320 [2024-05-15 20:29:52.697464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.697840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.697866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.320 qpair failed and we were unable to recover it. 00:38:00.320 [2024-05-15 20:29:52.698160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.698550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.698579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.320 qpair failed and we were unable to recover it. 00:38:00.320 [2024-05-15 20:29:52.698889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.699301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.699355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.320 qpair failed and we were unable to recover it. 
00:38:00.320 [2024-05-15 20:29:52.699760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.700150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.700176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.320 qpair failed and we were unable to recover it. 00:38:00.320 [2024-05-15 20:29:52.700618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.700906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.700937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.320 qpair failed and we were unable to recover it. 00:38:00.320 [2024-05-15 20:29:52.701340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.701673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.701700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.320 qpair failed and we were unable to recover it. 00:38:00.320 [2024-05-15 20:29:52.701992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.702409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.702437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.320 qpair failed and we were unable to recover it. 00:38:00.320 [2024-05-15 20:29:52.702847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.703179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.703206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.320 qpair failed and we were unable to recover it. 00:38:00.320 [2024-05-15 20:29:52.703580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.704007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.704033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.320 qpair failed and we were unable to recover it. 00:38:00.320 [2024-05-15 20:29:52.704459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.704874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.704900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.320 qpair failed and we were unable to recover it. 
00:38:00.320 [2024-05-15 20:29:52.705299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.705609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.705637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.320 qpair failed and we were unable to recover it. 00:38:00.320 [2024-05-15 20:29:52.706068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.706480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.706507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.320 qpair failed and we were unable to recover it. 00:38:00.320 [2024-05-15 20:29:52.706857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.707185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.707211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.320 qpair failed and we were unable to recover it. 00:38:00.320 [2024-05-15 20:29:52.707511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.707932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.707958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.320 qpair failed and we were unable to recover it. 00:38:00.320 [2024-05-15 20:29:52.708266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.708556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.708586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.320 qpair failed and we were unable to recover it. 00:38:00.320 [2024-05-15 20:29:52.708874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.709270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.709296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.320 qpair failed and we were unable to recover it. 00:38:00.320 [2024-05-15 20:29:52.709658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.709952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.709977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.320 qpair failed and we were unable to recover it. 
00:38:00.320 [2024-05-15 20:29:52.710382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.710768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.710794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.320 qpair failed and we were unable to recover it. 00:38:00.320 [2024-05-15 20:29:52.711264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.711658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.711686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.320 qpair failed and we were unable to recover it. 00:38:00.320 [2024-05-15 20:29:52.712095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.712456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.712483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.320 qpair failed and we were unable to recover it. 00:38:00.320 [2024-05-15 20:29:52.712900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.713285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.713311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.320 qpair failed and we were unable to recover it. 00:38:00.320 [2024-05-15 20:29:52.713715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.714056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.714083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.320 qpair failed and we were unable to recover it. 00:38:00.320 [2024-05-15 20:29:52.714511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.714928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.714956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.320 qpair failed and we were unable to recover it. 00:38:00.320 [2024-05-15 20:29:52.715376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.715724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.715751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.320 qpair failed and we were unable to recover it. 
00:38:00.320 [2024-05-15 20:29:52.716136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.716563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.320 [2024-05-15 20:29:52.716590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.320 qpair failed and we were unable to recover it. 00:38:00.320 [2024-05-15 20:29:52.716941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.321 [2024-05-15 20:29:52.717346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.321 [2024-05-15 20:29:52.717373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.321 qpair failed and we were unable to recover it. 00:38:00.321 [2024-05-15 20:29:52.717802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.321 [2024-05-15 20:29:52.718225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.321 [2024-05-15 20:29:52.718251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.321 qpair failed and we were unable to recover it. 00:38:00.321 [2024-05-15 20:29:52.718677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.321 [2024-05-15 20:29:52.719099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.321 [2024-05-15 20:29:52.719125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.321 qpair failed and we were unable to recover it. 00:38:00.321 [2024-05-15 20:29:52.719587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.321 [2024-05-15 20:29:52.719972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.321 [2024-05-15 20:29:52.719999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.321 qpair failed and we were unable to recover it. 00:38:00.321 [2024-05-15 20:29:52.720311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.321 [2024-05-15 20:29:52.720653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.321 [2024-05-15 20:29:52.720678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.321 qpair failed and we were unable to recover it. 00:38:00.321 [2024-05-15 20:29:52.721110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.321 [2024-05-15 20:29:52.721498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.321 [2024-05-15 20:29:52.721525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.321 qpair failed and we were unable to recover it. 
00:38:00.321 [2024-05-15 20:29:52.721955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.321 [2024-05-15 20:29:52.722342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.321 [2024-05-15 20:29:52.722371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420
00:38:00.321 qpair failed and we were unable to recover it.
[... the same four-line failure sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0xf8d520 at 10.0.0.2:4420, then "qpair failed and we were unable to recover it.") repeats continuously, with only the timestamps advancing, through wall-clock 20:29:52.852294 / elapsed 00:38:00.321-00:38:00.592 ...]
00:38:00.592 [2024-05-15 20:29:52.851856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.592 [2024-05-15 20:29:52.852268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.592 [2024-05-15 20:29:52.852294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420
00:38:00.592 qpair failed and we were unable to recover it.
00:38:00.592 [2024-05-15 20:29:52.852668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.592 [2024-05-15 20:29:52.853068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.592 [2024-05-15 20:29:52.853094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.592 qpair failed and we were unable to recover it. 00:38:00.592 [2024-05-15 20:29:52.853547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.592 [2024-05-15 20:29:52.853946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.592 [2024-05-15 20:29:52.853974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.592 qpair failed and we were unable to recover it. 00:38:00.592 [2024-05-15 20:29:52.854333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.592 [2024-05-15 20:29:52.854778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.592 [2024-05-15 20:29:52.854805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.592 qpair failed and we were unable to recover it. 00:38:00.592 [2024-05-15 20:29:52.855226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.592 [2024-05-15 20:29:52.855646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.592 [2024-05-15 20:29:52.855675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.592 qpair failed and we were unable to recover it. 00:38:00.592 [2024-05-15 20:29:52.856103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.592 [2024-05-15 20:29:52.856619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.592 [2024-05-15 20:29:52.856723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.592 qpair failed and we were unable to recover it. 00:38:00.592 [2024-05-15 20:29:52.857239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.592 [2024-05-15 20:29:52.857675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.592 [2024-05-15 20:29:52.857704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.592 qpair failed and we were unable to recover it. 00:38:00.592 [2024-05-15 20:29:52.858138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.592 [2024-05-15 20:29:52.858637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.592 [2024-05-15 20:29:52.858741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.592 qpair failed and we were unable to recover it. 
00:38:00.592 [2024-05-15 20:29:52.859143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.592 [2024-05-15 20:29:52.859571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.592 [2024-05-15 20:29:52.859602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.592 qpair failed and we were unable to recover it. 00:38:00.592 [2024-05-15 20:29:52.859898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.592 [2024-05-15 20:29:52.860337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.592 [2024-05-15 20:29:52.860365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.592 qpair failed and we were unable to recover it. 00:38:00.592 [2024-05-15 20:29:52.860751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.592 [2024-05-15 20:29:52.861171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.592 [2024-05-15 20:29:52.861198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.592 qpair failed and we were unable to recover it. 00:38:00.592 [2024-05-15 20:29:52.861638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.592 [2024-05-15 20:29:52.862053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.592 [2024-05-15 20:29:52.862080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.592 qpair failed and we were unable to recover it. 00:38:00.592 [2024-05-15 20:29:52.862532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.592 [2024-05-15 20:29:52.862845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.592 [2024-05-15 20:29:52.862871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.592 qpair failed and we were unable to recover it. 00:38:00.592 [2024-05-15 20:29:52.863253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.592 [2024-05-15 20:29:52.863524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.592 [2024-05-15 20:29:52.863553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.592 qpair failed and we were unable to recover it. 00:38:00.592 [2024-05-15 20:29:52.863995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.592 [2024-05-15 20:29:52.864302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.592 [2024-05-15 20:29:52.864339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.592 qpair failed and we were unable to recover it. 
00:38:00.592 [2024-05-15 20:29:52.864772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.592 [2024-05-15 20:29:52.865175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.592 [2024-05-15 20:29:52.865201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.592 qpair failed and we were unable to recover it. 00:38:00.592 [2024-05-15 20:29:52.865667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.592 [2024-05-15 20:29:52.866073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.592 [2024-05-15 20:29:52.866101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.592 qpair failed and we were unable to recover it. 00:38:00.592 [2024-05-15 20:29:52.866474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.592 [2024-05-15 20:29:52.866892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.866919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.867339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.867821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.867849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.868274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.868595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.868625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.869039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.869383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.869433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.869880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.870281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.870307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 
00:38:00.593 [2024-05-15 20:29:52.870721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.871146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.871174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.871496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.871920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.871946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.872367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.872793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.872820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.873146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.873572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.873601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.874012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.874413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.874440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.874888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.875287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.875325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.875773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.876167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.876194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 
00:38:00.593 [2024-05-15 20:29:52.876568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.877010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.877037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.877373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.877821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.877849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.878287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.878697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.878723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.879141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.879595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.879630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.880049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.880505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.880534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.880981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.881421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.881449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.881885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.882175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.882206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 
00:38:00.593 [2024-05-15 20:29:52.882528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.882972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.882999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.883425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.883905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.883931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.884123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.884558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.884587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.885015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.885416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.885444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.885824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.886221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.886248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.886623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.887057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.887084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.887505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.887926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.887962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 
00:38:00.593 [2024-05-15 20:29:52.888388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.888819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.888845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.889270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.889695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.889722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.890037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.890499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.890527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.890986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.891389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.891417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.891843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.892324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.892354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.892793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.893193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.893220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.893627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.894045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.894072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 
00:38:00.593 [2024-05-15 20:29:52.894486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.894882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.894909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.895340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.895740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.895767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.896278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.896752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.896780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.897201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.897616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.897645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.898068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.898465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.898492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.898932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.899355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.899383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.899827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.900231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.900257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 
00:38:00.593 [2024-05-15 20:29:52.900697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.901094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.901120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.901552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.901950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.901977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.902407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.902813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.902841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.903246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.903563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.903591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.904042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.904339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.904366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.904765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.905118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.905146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.905584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.906030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.906057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 
00:38:00.593 [2024-05-15 20:29:52.906405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.906801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.906828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.907245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.907640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.907669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.593 qpair failed and we were unable to recover it. 00:38:00.593 [2024-05-15 20:29:52.908106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.593 [2024-05-15 20:29:52.908536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.908564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.594 qpair failed and we were unable to recover it. 00:38:00.594 [2024-05-15 20:29:52.908984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.909398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.909427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.594 qpair failed and we were unable to recover it. 00:38:00.594 [2024-05-15 20:29:52.909869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.910295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.910357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.594 qpair failed and we were unable to recover it. 00:38:00.594 [2024-05-15 20:29:52.910725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.911141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.911167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.594 qpair failed and we were unable to recover it. 00:38:00.594 [2024-05-15 20:29:52.911650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.912050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.912077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.594 qpair failed and we were unable to recover it. 
00:38:00.594 [2024-05-15 20:29:52.912502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.912900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.912926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.594 qpair failed and we were unable to recover it. 00:38:00.594 [2024-05-15 20:29:52.913365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.913793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.913818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.594 qpair failed and we were unable to recover it. 00:38:00.594 [2024-05-15 20:29:52.914132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.914570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.914599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.594 qpair failed and we were unable to recover it. 00:38:00.594 [2024-05-15 20:29:52.914903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.915357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.915385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.594 qpair failed and we were unable to recover it. 00:38:00.594 [2024-05-15 20:29:52.915800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.916216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.916242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.594 qpair failed and we were unable to recover it. 00:38:00.594 [2024-05-15 20:29:52.916694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.917174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.917201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.594 qpair failed and we were unable to recover it. 00:38:00.594 [2024-05-15 20:29:52.917520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.917947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.917973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.594 qpair failed and we were unable to recover it. 
00:38:00.594 [2024-05-15 20:29:52.918378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.918770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.918796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.594 qpair failed and we were unable to recover it. 00:38:00.594 [2024-05-15 20:29:52.919230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.919635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.919664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.594 qpair failed and we were unable to recover it. 00:38:00.594 [2024-05-15 20:29:52.920098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.920521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.920549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.594 qpair failed and we were unable to recover it. 00:38:00.594 [2024-05-15 20:29:52.920974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.921445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.921473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.594 qpair failed and we were unable to recover it. 00:38:00.594 [2024-05-15 20:29:52.921878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.922289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.922329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.594 qpair failed and we were unable to recover it. 00:38:00.594 [2024-05-15 20:29:52.922681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.923109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.923143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.594 qpair failed and we were unable to recover it. 00:38:00.594 [2024-05-15 20:29:52.923612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.924033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.924060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.594 qpair failed and we were unable to recover it. 
00:38:00.594 [2024-05-15 20:29:52.924485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.926701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.926767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.594 qpair failed and we were unable to recover it. 00:38:00.594 [2024-05-15 20:29:52.927224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.927664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.927694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.594 qpair failed and we were unable to recover it. 00:38:00.594 [2024-05-15 20:29:52.928129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.928606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.928634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.594 qpair failed and we were unable to recover it. 00:38:00.594 [2024-05-15 20:29:52.929083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.929618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.929721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.594 qpair failed and we were unable to recover it. 00:38:00.594 [2024-05-15 20:29:52.930202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.930641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.930672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.594 qpair failed and we were unable to recover it. 00:38:00.594 [2024-05-15 20:29:52.931072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.931498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.931528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.594 qpair failed and we were unable to recover it. 00:38:00.594 [2024-05-15 20:29:52.931952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.932351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.932380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.594 qpair failed and we were unable to recover it. 
00:38:00.594 [2024-05-15 20:29:52.932762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.933187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.933213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.594 qpair failed and we were unable to recover it. 00:38:00.594 [2024-05-15 20:29:52.933577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.934009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.934036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.594 qpair failed and we were unable to recover it. 00:38:00.594 [2024-05-15 20:29:52.934489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.934910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.934937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.594 qpair failed and we were unable to recover it. 00:38:00.594 [2024-05-15 20:29:52.935371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.935811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.935838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.594 qpair failed and we were unable to recover it. 00:38:00.594 [2024-05-15 20:29:52.936249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.936693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.936721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.594 qpair failed and we were unable to recover it. 00:38:00.594 [2024-05-15 20:29:52.937132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.937577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.937605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.594 qpair failed and we were unable to recover it. 00:38:00.594 [2024-05-15 20:29:52.937931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.938375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.594 [2024-05-15 20:29:52.938405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.594 qpair failed and we were unable to recover it. 
00:38:00.594 [2024-05-15 20:29:52.938839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.594 [2024-05-15 20:29:52.939235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.594 [2024-05-15 20:29:52.939261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420
00:38:00.594 qpair failed and we were unable to recover it.
[... the same failure cycle (two posix_sock_create connect() errors with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0xf8d520 at 10.0.0.2, port 4420, and "qpair failed and we were unable to recover it.") repeats for every attempt timestamped from 2024-05-15 20:29:52.939584 through 20:29:53.068798, while the console elapsed time advances from 00:38:00.594 to 00:38:00.597 ...]
00:38:00.597 [2024-05-15 20:29:53.069220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.597 [2024-05-15 20:29:53.069500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.597 [2024-05-15 20:29:53.069527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420
00:38:00.597 qpair failed and we were unable to recover it.
00:38:00.597 [2024-05-15 20:29:53.069869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.597 [2024-05-15 20:29:53.070230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.597 [2024-05-15 20:29:53.070257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.597 qpair failed and we were unable to recover it. 00:38:00.597 [2024-05-15 20:29:53.070686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.597 [2024-05-15 20:29:53.070986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.597 [2024-05-15 20:29:53.071016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.597 qpair failed and we were unable to recover it. 00:38:00.597 [2024-05-15 20:29:53.071454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.597 [2024-05-15 20:29:53.071863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.597 [2024-05-15 20:29:53.071889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.597 qpair failed and we were unable to recover it. 00:38:00.597 [2024-05-15 20:29:53.072336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.597 [2024-05-15 20:29:53.072747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.597 [2024-05-15 20:29:53.072774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.597 qpair failed and we were unable to recover it. 00:38:00.597 [2024-05-15 20:29:53.073207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.597 [2024-05-15 20:29:53.073504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.597 [2024-05-15 20:29:53.073536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.597 qpair failed and we were unable to recover it. 00:38:00.597 [2024-05-15 20:29:53.073952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.597 [2024-05-15 20:29:53.074350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.597 [2024-05-15 20:29:53.074378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.597 qpair failed and we were unable to recover it. 00:38:00.597 [2024-05-15 20:29:53.074814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.597 [2024-05-15 20:29:53.075215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.597 [2024-05-15 20:29:53.075240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.597 qpair failed and we were unable to recover it. 
00:38:00.597 [2024-05-15 20:29:53.075689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.597 [2024-05-15 20:29:53.076008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.597 [2024-05-15 20:29:53.076034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.597 qpair failed and we were unable to recover it. 00:38:00.597 [2024-05-15 20:29:53.076460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.597 [2024-05-15 20:29:53.076903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.597 [2024-05-15 20:29:53.076931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.597 qpair failed and we were unable to recover it. 00:38:00.597 [2024-05-15 20:29:53.077407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.597 [2024-05-15 20:29:53.077859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.597 [2024-05-15 20:29:53.077886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.597 qpair failed and we were unable to recover it. 00:38:00.597 [2024-05-15 20:29:53.078322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.597 [2024-05-15 20:29:53.078732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.597 [2024-05-15 20:29:53.078759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.597 qpair failed and we were unable to recover it. 00:38:00.597 [2024-05-15 20:29:53.079134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.597 [2024-05-15 20:29:53.079530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.597 [2024-05-15 20:29:53.079558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.597 qpair failed and we were unable to recover it. 00:38:00.597 [2024-05-15 20:29:53.079975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.597 [2024-05-15 20:29:53.080332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.597 [2024-05-15 20:29:53.080360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.597 qpair failed and we were unable to recover it. 00:38:00.597 [2024-05-15 20:29:53.080770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.597 [2024-05-15 20:29:53.081182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.597 [2024-05-15 20:29:53.081209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.597 qpair failed and we were unable to recover it. 
00:38:00.597 [2024-05-15 20:29:53.081615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.597 [2024-05-15 20:29:53.082055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.597 [2024-05-15 20:29:53.082083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.597 qpair failed and we were unable to recover it. 00:38:00.597 [2024-05-15 20:29:53.082518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.597 [2024-05-15 20:29:53.082952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.597 [2024-05-15 20:29:53.082978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.597 qpair failed and we were unable to recover it. 00:38:00.598 [2024-05-15 20:29:53.083421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.598 [2024-05-15 20:29:53.083821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.598 [2024-05-15 20:29:53.083847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.598 qpair failed and we were unable to recover it. 00:38:00.598 [2024-05-15 20:29:53.084244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.598 [2024-05-15 20:29:53.084655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.598 [2024-05-15 20:29:53.084682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.598 qpair failed and we were unable to recover it. 00:38:00.598 [2024-05-15 20:29:53.085120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.598 [2024-05-15 20:29:53.085517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.598 [2024-05-15 20:29:53.085545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.598 qpair failed and we were unable to recover it. 00:38:00.598 [2024-05-15 20:29:53.085973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.598 [2024-05-15 20:29:53.086395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.598 [2024-05-15 20:29:53.086423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.598 qpair failed and we were unable to recover it. 00:38:00.865 [2024-05-15 20:29:53.086846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.865 [2024-05-15 20:29:53.087274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.865 [2024-05-15 20:29:53.087301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.865 qpair failed and we were unable to recover it. 
00:38:00.865 [2024-05-15 20:29:53.087743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.865 [2024-05-15 20:29:53.088143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.865 [2024-05-15 20:29:53.088169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.865 qpair failed and we were unable to recover it. 00:38:00.865 [2024-05-15 20:29:53.088490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.865 [2024-05-15 20:29:53.088880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.865 [2024-05-15 20:29:53.088907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.865 qpair failed and we were unable to recover it. 00:38:00.865 [2024-05-15 20:29:53.089283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.865 [2024-05-15 20:29:53.089676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.865 [2024-05-15 20:29:53.089704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.865 qpair failed and we were unable to recover it. 00:38:00.865 [2024-05-15 20:29:53.090136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.865 [2024-05-15 20:29:53.090534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.865 [2024-05-15 20:29:53.090568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.865 qpair failed and we were unable to recover it. 00:38:00.865 [2024-05-15 20:29:53.091049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.865 [2024-05-15 20:29:53.091457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.865 [2024-05-15 20:29:53.091483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.865 qpair failed and we were unable to recover it. 00:38:00.865 [2024-05-15 20:29:53.091886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.865 [2024-05-15 20:29:53.092286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.865 [2024-05-15 20:29:53.092312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.865 qpair failed and we were unable to recover it. 00:38:00.865 [2024-05-15 20:29:53.092794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.865 [2024-05-15 20:29:53.093194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.865 [2024-05-15 20:29:53.093221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.865 qpair failed and we were unable to recover it. 
00:38:00.865 [2024-05-15 20:29:53.093650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.865 [2024-05-15 20:29:53.093958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.865 [2024-05-15 20:29:53.093985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.865 qpair failed and we were unable to recover it. 00:38:00.865 [2024-05-15 20:29:53.094425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.865 [2024-05-15 20:29:53.094924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.865 [2024-05-15 20:29:53.094950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.865 qpair failed and we were unable to recover it. 00:38:00.865 [2024-05-15 20:29:53.095349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.865 [2024-05-15 20:29:53.095777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.865 [2024-05-15 20:29:53.095804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.865 qpair failed and we were unable to recover it. 00:38:00.865 [2024-05-15 20:29:53.096171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.865 [2024-05-15 20:29:53.096569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.865 [2024-05-15 20:29:53.096597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.865 qpair failed and we were unable to recover it. 00:38:00.865 [2024-05-15 20:29:53.097038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.865 [2024-05-15 20:29:53.097399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.865 [2024-05-15 20:29:53.097429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.865 qpair failed and we were unable to recover it. 00:38:00.865 [2024-05-15 20:29:53.097845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.865 [2024-05-15 20:29:53.098271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.865 [2024-05-15 20:29:53.098298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.865 qpair failed and we were unable to recover it. 00:38:00.865 [2024-05-15 20:29:53.098715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.865 [2024-05-15 20:29:53.099113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.865 [2024-05-15 20:29:53.099139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.865 qpair failed and we were unable to recover it. 
00:38:00.865 [2024-05-15 20:29:53.099557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.865 [2024-05-15 20:29:53.099929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.865 [2024-05-15 20:29:53.099955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.865 qpair failed and we were unable to recover it. 00:38:00.865 [2024-05-15 20:29:53.100379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.865 [2024-05-15 20:29:53.100803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.100829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.866 qpair failed and we were unable to recover it. 00:38:00.866 [2024-05-15 20:29:53.101252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.101547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.101577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.866 qpair failed and we were unable to recover it. 00:38:00.866 [2024-05-15 20:29:53.101990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.102407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.102435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.866 qpair failed and we were unable to recover it. 00:38:00.866 [2024-05-15 20:29:53.102906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.103338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.103365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.866 qpair failed and we were unable to recover it. 00:38:00.866 [2024-05-15 20:29:53.103735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.104181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.104208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.866 qpair failed and we were unable to recover it. 00:38:00.866 [2024-05-15 20:29:53.104681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.105114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.105141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.866 qpair failed and we were unable to recover it. 
00:38:00.866 [2024-05-15 20:29:53.105566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.105938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.105965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.866 qpair failed and we were unable to recover it. 00:38:00.866 [2024-05-15 20:29:53.106408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.106851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.106877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.866 qpair failed and we were unable to recover it. 00:38:00.866 [2024-05-15 20:29:53.107320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.107744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.107770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.866 qpair failed and we were unable to recover it. 00:38:00.866 [2024-05-15 20:29:53.108184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.108643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.108671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.866 qpair failed and we were unable to recover it. 00:38:00.866 [2024-05-15 20:29:53.109098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.109528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.109557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.866 qpair failed and we were unable to recover it. 00:38:00.866 [2024-05-15 20:29:53.109973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.110408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.110436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.866 qpair failed and we were unable to recover it. 00:38:00.866 [2024-05-15 20:29:53.110926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.111344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.111373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.866 qpair failed and we were unable to recover it. 
00:38:00.866 [2024-05-15 20:29:53.111776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.112200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.112226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.866 qpair failed and we were unable to recover it. 00:38:00.866 [2024-05-15 20:29:53.112543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.112845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.112872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.866 qpair failed and we were unable to recover it. 00:38:00.866 [2024-05-15 20:29:53.113251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.113666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.113695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.866 qpair failed and we were unable to recover it. 00:38:00.866 [2024-05-15 20:29:53.113996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.114465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.114493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.866 qpair failed and we were unable to recover it. 00:38:00.866 [2024-05-15 20:29:53.114883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.115341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.115369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.866 qpair failed and we were unable to recover it. 00:38:00.866 [2024-05-15 20:29:53.115798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.116197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.116223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.866 qpair failed and we were unable to recover it. 00:38:00.866 [2024-05-15 20:29:53.116631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.117049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.117077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.866 qpair failed and we were unable to recover it. 
00:38:00.866 [2024-05-15 20:29:53.117504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.117903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.117930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.866 qpair failed and we were unable to recover it. 00:38:00.866 [2024-05-15 20:29:53.118348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.118779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.118805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.866 qpair failed and we were unable to recover it. 00:38:00.866 [2024-05-15 20:29:53.119193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.119649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.119676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.866 qpair failed and we were unable to recover it. 00:38:00.866 [2024-05-15 20:29:53.120088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.120509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.120536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.866 qpair failed and we were unable to recover it. 00:38:00.866 [2024-05-15 20:29:53.120964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.121366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.121396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.866 qpair failed and we were unable to recover it. 00:38:00.866 [2024-05-15 20:29:53.121796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.122155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.122182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.866 qpair failed and we were unable to recover it. 00:38:00.866 [2024-05-15 20:29:53.122623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.122985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.123010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.866 qpair failed and we were unable to recover it. 
00:38:00.866 [2024-05-15 20:29:53.123442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.123843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.123869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.866 qpair failed and we were unable to recover it. 00:38:00.866 [2024-05-15 20:29:53.124294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.124703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.124730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.866 qpair failed and we were unable to recover it. 00:38:00.866 [2024-05-15 20:29:53.125150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.125547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.866 [2024-05-15 20:29:53.125581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.866 qpair failed and we were unable to recover it. 00:38:00.866 [2024-05-15 20:29:53.125889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.126301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.126336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.867 qpair failed and we were unable to recover it. 00:38:00.867 [2024-05-15 20:29:53.126766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.127186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.127213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.867 qpair failed and we were unable to recover it. 00:38:00.867 [2024-05-15 20:29:53.127620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.128023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.128049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.867 qpair failed and we were unable to recover it. 00:38:00.867 [2024-05-15 20:29:53.128361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.128760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.128787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.867 qpair failed and we were unable to recover it. 
00:38:00.867 [2024-05-15 20:29:53.129099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.129519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.129547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.867 qpair failed and we were unable to recover it. 00:38:00.867 [2024-05-15 20:29:53.129864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.130159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.130188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.867 qpair failed and we were unable to recover it. 00:38:00.867 [2024-05-15 20:29:53.130479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.130892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.130918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.867 qpair failed and we were unable to recover it. 00:38:00.867 [2024-05-15 20:29:53.131250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.131673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.131701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.867 qpair failed and we were unable to recover it. 00:38:00.867 [2024-05-15 20:29:53.132012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.132416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.132443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.867 qpair failed and we were unable to recover it. 00:38:00.867 [2024-05-15 20:29:53.132883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.133271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.133303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.867 qpair failed and we were unable to recover it. 00:38:00.867 [2024-05-15 20:29:53.133615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.133990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.134017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.867 qpair failed and we were unable to recover it. 
00:38:00.867 [2024-05-15 20:29:53.134427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.134847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.134875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.867 qpair failed and we were unable to recover it. 00:38:00.867 [2024-05-15 20:29:53.135307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.135647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.135673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.867 qpair failed and we were unable to recover it. 00:38:00.867 [2024-05-15 20:29:53.136089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.136389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.136420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.867 qpair failed and we were unable to recover it. 00:38:00.867 [2024-05-15 20:29:53.136847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.137271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.137297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.867 qpair failed and we were unable to recover it. 00:38:00.867 [2024-05-15 20:29:53.137742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.138143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.138169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.867 qpair failed and we were unable to recover it. 00:38:00.867 [2024-05-15 20:29:53.138497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.138942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.138969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.867 qpair failed and we were unable to recover it. 00:38:00.867 [2024-05-15 20:29:53.139367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.139765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.139791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.867 qpair failed and we were unable to recover it. 
00:38:00.867 [2024-05-15 20:29:53.140231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.140657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.140684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.867 qpair failed and we were unable to recover it. 00:38:00.867 [2024-05-15 20:29:53.141097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.141514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.141541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.867 qpair failed and we were unable to recover it. 00:38:00.867 [2024-05-15 20:29:53.141972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.142392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.142421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.867 qpair failed and we were unable to recover it. 00:38:00.867 [2024-05-15 20:29:53.142857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.143286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.143311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.867 qpair failed and we were unable to recover it. 00:38:00.867 [2024-05-15 20:29:53.143730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.144140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.144166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.867 qpair failed and we were unable to recover it. 00:38:00.867 [2024-05-15 20:29:53.144570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.144904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.144930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.867 qpair failed and we were unable to recover it. 00:38:00.867 [2024-05-15 20:29:53.145363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.145761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.145787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.867 qpair failed and we were unable to recover it. 
00:38:00.867 [2024-05-15 20:29:53.146201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.146621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.146649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.867 qpair failed and we were unable to recover it. 00:38:00.867 [2024-05-15 20:29:53.147077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.147479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.147507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.867 qpair failed and we were unable to recover it. 00:38:00.867 [2024-05-15 20:29:53.147947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.148343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.148371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.867 qpair failed and we were unable to recover it. 00:38:00.867 [2024-05-15 20:29:53.148809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.149231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.867 [2024-05-15 20:29:53.149257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.867 qpair failed and we were unable to recover it. 00:38:00.867 [2024-05-15 20:29:53.149704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.868 [2024-05-15 20:29:53.150063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.868 [2024-05-15 20:29:53.150090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.868 qpair failed and we were unable to recover it. 00:38:00.868 [2024-05-15 20:29:53.150517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.868 [2024-05-15 20:29:53.150916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.868 [2024-05-15 20:29:53.150942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.868 qpair failed and we were unable to recover it. 00:38:00.868 [2024-05-15 20:29:53.151378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.868 [2024-05-15 20:29:53.151777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.868 [2024-05-15 20:29:53.151804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.868 qpair failed and we were unable to recover it. 
00:38:00.868 [2024-05-15 20:29:53.152238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.868 [2024-05-15 20:29:53.152637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.868 [2024-05-15 20:29:53.152664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420
00:38:00.868 qpair failed and we were unable to recover it.
00:38:00.868 [2024-05-15 20:29:53.153077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.868 [2024-05-15 20:29:53.153494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.868 [2024-05-15 20:29:53.153522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420
00:38:00.868 qpair failed and we were unable to recover it.
[... the same four-line failure pattern (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats for every retry from 20:29:53.153948 through 20:29:53.282228 ...]
00:38:00.873 [2024-05-15 20:29:53.282697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.873 [2024-05-15 20:29:53.283116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.873 [2024-05-15 20:29:53.283143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420
00:38:00.873 qpair failed and we were unable to recover it.
00:38:00.873 [2024-05-15 20:29:53.283662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.873 [2024-05-15 20:29:53.284237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.873 [2024-05-15 20:29:53.284274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.873 qpair failed and we were unable to recover it. 00:38:00.873 [2024-05-15 20:29:53.284760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.873 [2024-05-15 20:29:53.285178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.873 [2024-05-15 20:29:53.285205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.873 qpair failed and we were unable to recover it. 00:38:00.873 [2024-05-15 20:29:53.285620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.873 [2024-05-15 20:29:53.286016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.873 [2024-05-15 20:29:53.286043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.873 qpair failed and we were unable to recover it. 00:38:00.873 [2024-05-15 20:29:53.286457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.873 [2024-05-15 20:29:53.286884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.873 [2024-05-15 20:29:53.286911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.873 qpair failed and we were unable to recover it. 00:38:00.873 [2024-05-15 20:29:53.287343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.873 [2024-05-15 20:29:53.287736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.873 [2024-05-15 20:29:53.287763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.873 qpair failed and we were unable to recover it. 00:38:00.873 [2024-05-15 20:29:53.288180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.873 [2024-05-15 20:29:53.288481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.873 [2024-05-15 20:29:53.288509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.873 qpair failed and we were unable to recover it. 00:38:00.873 [2024-05-15 20:29:53.288928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.873 [2024-05-15 20:29:53.289343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.873 [2024-05-15 20:29:53.289371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.873 qpair failed and we were unable to recover it. 
00:38:00.873 [2024-05-15 20:29:53.289787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.873 [2024-05-15 20:29:53.290200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.873 [2024-05-15 20:29:53.290238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.873 qpair failed and we were unable to recover it. 00:38:00.873 [2024-05-15 20:29:53.290645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.873 [2024-05-15 20:29:53.291044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.873 [2024-05-15 20:29:53.291071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.873 qpair failed and we were unable to recover it. 00:38:00.873 [2024-05-15 20:29:53.291488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.873 [2024-05-15 20:29:53.291887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.873 [2024-05-15 20:29:53.291913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.873 qpair failed and we were unable to recover it. 00:38:00.873 [2024-05-15 20:29:53.292348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.873 [2024-05-15 20:29:53.292512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.873 [2024-05-15 20:29:53.292538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.873 qpair failed and we were unable to recover it. 00:38:00.873 [2024-05-15 20:29:53.292969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.873 [2024-05-15 20:29:53.293240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.873 [2024-05-15 20:29:53.293274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.873 qpair failed and we were unable to recover it. 00:38:00.873 [2024-05-15 20:29:53.293724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.873 [2024-05-15 20:29:53.294073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.873 [2024-05-15 20:29:53.294100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.873 qpair failed and we were unable to recover it. 00:38:00.873 [2024-05-15 20:29:53.294533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.873 [2024-05-15 20:29:53.294930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.873 [2024-05-15 20:29:53.294956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.873 qpair failed and we were unable to recover it. 
00:38:00.873 [2024-05-15 20:29:53.295399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.873 [2024-05-15 20:29:53.295799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.873 [2024-05-15 20:29:53.295825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.873 qpair failed and we were unable to recover it. 00:38:00.873 [2024-05-15 20:29:53.296238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.873 [2024-05-15 20:29:53.296636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.873 [2024-05-15 20:29:53.296663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.873 qpair failed and we were unable to recover it. 00:38:00.874 [2024-05-15 20:29:53.296977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.297393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.297420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.874 qpair failed and we were unable to recover it. 00:38:00.874 [2024-05-15 20:29:53.297900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.298292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.298328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.874 qpair failed and we were unable to recover it. 00:38:00.874 [2024-05-15 20:29:53.298798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.299104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.299135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.874 qpair failed and we were unable to recover it. 00:38:00.874 [2024-05-15 20:29:53.299523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.299956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.299982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.874 qpair failed and we were unable to recover it. 00:38:00.874 [2024-05-15 20:29:53.300417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.300874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.300900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.874 qpair failed and we were unable to recover it. 
00:38:00.874 [2024-05-15 20:29:53.301312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.301746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.301772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.874 qpair failed and we were unable to recover it. 00:38:00.874 [2024-05-15 20:29:53.302199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.302615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.302644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.874 qpair failed and we were unable to recover it. 00:38:00.874 [2024-05-15 20:29:53.302973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.303402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.303431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.874 qpair failed and we were unable to recover it. 00:38:00.874 [2024-05-15 20:29:53.303745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.304035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.304066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.874 qpair failed and we were unable to recover it. 00:38:00.874 [2024-05-15 20:29:53.304500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.304900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.304928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.874 qpair failed and we were unable to recover it. 00:38:00.874 [2024-05-15 20:29:53.305357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.305753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.305780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.874 qpair failed and we were unable to recover it. 00:38:00.874 [2024-05-15 20:29:53.306189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.306611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.306640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.874 qpair failed and we were unable to recover it. 
00:38:00.874 [2024-05-15 20:29:53.307071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.307505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.307533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.874 qpair failed and we were unable to recover it. 00:38:00.874 [2024-05-15 20:29:53.307972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.308373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.308399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.874 qpair failed and we were unable to recover it. 00:38:00.874 [2024-05-15 20:29:53.308712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.309163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.309189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.874 qpair failed and we were unable to recover it. 00:38:00.874 [2024-05-15 20:29:53.309658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.310127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.310153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.874 qpair failed and we were unable to recover it. 00:38:00.874 [2024-05-15 20:29:53.310564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.310961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.310987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.874 qpair failed and we were unable to recover it. 00:38:00.874 [2024-05-15 20:29:53.311402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.311847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.311874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.874 qpair failed and we were unable to recover it. 00:38:00.874 [2024-05-15 20:29:53.312306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.312746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.312773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.874 qpair failed and we were unable to recover it. 
00:38:00.874 [2024-05-15 20:29:53.313221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.313652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.313679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.874 qpair failed and we were unable to recover it. 00:38:00.874 [2024-05-15 20:29:53.314112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.314506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.314533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.874 qpair failed and we were unable to recover it. 00:38:00.874 [2024-05-15 20:29:53.314970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.315391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.315419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.874 qpair failed and we were unable to recover it. 00:38:00.874 [2024-05-15 20:29:53.315866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.316266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.316293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.874 qpair failed and we were unable to recover it. 00:38:00.874 [2024-05-15 20:29:53.316756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.317155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.317182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.874 qpair failed and we were unable to recover it. 00:38:00.874 [2024-05-15 20:29:53.317523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.317946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.317972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.874 qpair failed and we were unable to recover it. 00:38:00.874 [2024-05-15 20:29:53.318385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.874 [2024-05-15 20:29:53.318787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.318815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.875 qpair failed and we were unable to recover it. 
00:38:00.875 [2024-05-15 20:29:53.319269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.319767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.319795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.875 qpair failed and we were unable to recover it. 00:38:00.875 [2024-05-15 20:29:53.320168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.320475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.320524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.875 qpair failed and we were unable to recover it. 00:38:00.875 [2024-05-15 20:29:53.320948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.321351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.321378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.875 qpair failed and we were unable to recover it. 00:38:00.875 [2024-05-15 20:29:53.321799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.322222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.322249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.875 qpair failed and we were unable to recover it. 00:38:00.875 [2024-05-15 20:29:53.322698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.323129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.323156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.875 qpair failed and we were unable to recover it. 00:38:00.875 [2024-05-15 20:29:53.323607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.324010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.324036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.875 qpair failed and we were unable to recover it. 00:38:00.875 [2024-05-15 20:29:53.324470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.324852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.324880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.875 qpair failed and we were unable to recover it. 
00:38:00.875 [2024-05-15 20:29:53.325334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.325763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.325789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.875 qpair failed and we were unable to recover it. 00:38:00.875 [2024-05-15 20:29:53.326180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.326609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.326637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.875 qpair failed and we were unable to recover it. 00:38:00.875 [2024-05-15 20:29:53.327055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.327455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.327483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.875 qpair failed and we were unable to recover it. 00:38:00.875 [2024-05-15 20:29:53.327894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.328290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.328325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.875 qpair failed and we were unable to recover it. 00:38:00.875 [2024-05-15 20:29:53.328729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.329169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.329196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.875 qpair failed and we were unable to recover it. 00:38:00.875 [2024-05-15 20:29:53.329609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.330013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.330041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.875 qpair failed and we were unable to recover it. 00:38:00.875 [2024-05-15 20:29:53.330362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.330791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.330817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.875 qpair failed and we were unable to recover it. 
00:38:00.875 [2024-05-15 20:29:53.331226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.331629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.331656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.875 qpair failed and we were unable to recover it. 00:38:00.875 [2024-05-15 20:29:53.332074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.332472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.332499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.875 qpair failed and we were unable to recover it. 00:38:00.875 [2024-05-15 20:29:53.333016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.333410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.333443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.875 qpair failed and we were unable to recover it. 00:38:00.875 [2024-05-15 20:29:53.333870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.334232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.334259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.875 qpair failed and we were unable to recover it. 00:38:00.875 [2024-05-15 20:29:53.334623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.335041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.335068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.875 qpair failed and we were unable to recover it. 00:38:00.875 [2024-05-15 20:29:53.335389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.335793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.335819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.875 qpair failed and we were unable to recover it. 00:38:00.875 [2024-05-15 20:29:53.336289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.336581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.336608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.875 qpair failed and we were unable to recover it. 
00:38:00.875 [2024-05-15 20:29:53.336902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.337336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.337365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.875 qpair failed and we were unable to recover it. 00:38:00.875 [2024-05-15 20:29:53.337789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.338188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.338214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.875 qpair failed and we were unable to recover it. 00:38:00.875 [2024-05-15 20:29:53.338671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.338981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.339007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.875 qpair failed and we were unable to recover it. 00:38:00.875 [2024-05-15 20:29:53.339390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.339817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.339843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.875 qpair failed and we were unable to recover it. 00:38:00.875 [2024-05-15 20:29:53.340243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.340640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.340667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.875 qpair failed and we were unable to recover it. 00:38:00.875 [2024-05-15 20:29:53.340976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.341356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.341384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.875 qpair failed and we were unable to recover it. 00:38:00.875 [2024-05-15 20:29:53.341836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.342232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.342258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.875 qpair failed and we were unable to recover it. 
00:38:00.875 [2024-05-15 20:29:53.342670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.343070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.875 [2024-05-15 20:29:53.343097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.875 qpair failed and we were unable to recover it. 00:38:00.876 [2024-05-15 20:29:53.343515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.343904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.343931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.876 qpair failed and we were unable to recover it. 00:38:00.876 [2024-05-15 20:29:53.344368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.344801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.344828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.876 qpair failed and we were unable to recover it. 00:38:00.876 [2024-05-15 20:29:53.345272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.345693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.345720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.876 qpair failed and we were unable to recover it. 00:38:00.876 [2024-05-15 20:29:53.346147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.346543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.346572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.876 qpair failed and we were unable to recover it. 00:38:00.876 [2024-05-15 20:29:53.346986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.347388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.347415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.876 qpair failed and we were unable to recover it. 00:38:00.876 [2024-05-15 20:29:53.347836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.348233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.348259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.876 qpair failed and we were unable to recover it. 
00:38:00.876 [2024-05-15 20:29:53.348678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.349103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.349130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.876 qpair failed and we were unable to recover it. 00:38:00.876 [2024-05-15 20:29:53.349517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.349899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.349925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.876 qpair failed and we were unable to recover it. 00:38:00.876 [2024-05-15 20:29:53.350351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.350777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.350803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.876 qpair failed and we were unable to recover it. 00:38:00.876 [2024-05-15 20:29:53.351233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.351704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.351732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.876 qpair failed and we were unable to recover it. 00:38:00.876 [2024-05-15 20:29:53.352141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.352546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.352573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.876 qpair failed and we were unable to recover it. 00:38:00.876 [2024-05-15 20:29:53.352984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.353407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.353436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.876 qpair failed and we were unable to recover it. 00:38:00.876 [2024-05-15 20:29:53.353888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.354308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.354347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.876 qpair failed and we were unable to recover it. 
00:38:00.876 [2024-05-15 20:29:53.354783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.355182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.355208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.876 qpair failed and we were unable to recover it. 00:38:00.876 [2024-05-15 20:29:53.355656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.356095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.356121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.876 qpair failed and we were unable to recover it. 00:38:00.876 [2024-05-15 20:29:53.356446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.356907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.356933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.876 qpair failed and we were unable to recover it. 00:38:00.876 [2024-05-15 20:29:53.357329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.357789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.357814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.876 qpair failed and we were unable to recover it. 00:38:00.876 [2024-05-15 20:29:53.358211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.358625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.358653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.876 qpair failed and we were unable to recover it. 00:38:00.876 [2024-05-15 20:29:53.358960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.359344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.876 [2024-05-15 20:29:53.359372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:00.876 qpair failed and we were unable to recover it. 00:38:00.876 [2024-05-15 20:29:53.359805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.360245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.360273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.144 qpair failed and we were unable to recover it. 
00:38:01.144 [2024-05-15 20:29:53.360660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.361086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.361113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.144 qpair failed and we were unable to recover it. 00:38:01.144 [2024-05-15 20:29:53.361545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.361851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.361881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.144 qpair failed and we were unable to recover it. 00:38:01.144 [2024-05-15 20:29:53.362198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.362582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.362609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.144 qpair failed and we were unable to recover it. 00:38:01.144 [2024-05-15 20:29:53.363039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.363470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.363498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.144 qpair failed and we were unable to recover it. 00:38:01.144 [2024-05-15 20:29:53.363930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.364335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.364362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.144 qpair failed and we were unable to recover it. 00:38:01.144 [2024-05-15 20:29:53.364761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.365189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.365215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.144 qpair failed and we were unable to recover it. 00:38:01.144 [2024-05-15 20:29:53.365705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.366126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.366153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.144 qpair failed and we were unable to recover it. 
00:38:01.144 [2024-05-15 20:29:53.366516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.366935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.366964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.144 qpair failed and we were unable to recover it. 00:38:01.144 [2024-05-15 20:29:53.367398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.367825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.367860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.144 qpair failed and we were unable to recover it. 00:38:01.144 [2024-05-15 20:29:53.368278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.368685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.368716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.144 qpair failed and we were unable to recover it. 00:38:01.144 [2024-05-15 20:29:53.369146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.369569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.369600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.144 qpair failed and we were unable to recover it. 00:38:01.144 [2024-05-15 20:29:53.370021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.370441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.370473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.144 qpair failed and we were unable to recover it. 00:38:01.144 [2024-05-15 20:29:53.370921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.371343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.371375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.144 qpair failed and we were unable to recover it. 00:38:01.144 [2024-05-15 20:29:53.371804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.372226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.372255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.144 qpair failed and we were unable to recover it. 
00:38:01.144 [2024-05-15 20:29:53.372688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.373106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.373134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.144 qpair failed and we were unable to recover it. 00:38:01.144 [2024-05-15 20:29:53.373557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.373921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.373950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.144 qpair failed and we were unable to recover it. 00:38:01.144 [2024-05-15 20:29:53.374366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.374796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.374825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.144 qpair failed and we were unable to recover it. 00:38:01.144 [2024-05-15 20:29:53.375249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.375669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.375700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.144 qpair failed and we were unable to recover it. 00:38:01.144 [2024-05-15 20:29:53.376136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.376558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.376594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.144 qpair failed and we were unable to recover it. 00:38:01.144 [2024-05-15 20:29:53.376907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.377339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.377368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.144 qpair failed and we were unable to recover it. 00:38:01.144 [2024-05-15 20:29:53.377823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.378244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.378273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.144 qpair failed and we were unable to recover it. 
00:38:01.144 [2024-05-15 20:29:53.378691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.379079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.379107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.144 qpair failed and we were unable to recover it. 00:38:01.144 [2024-05-15 20:29:53.379495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.379918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.379946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.144 qpair failed and we were unable to recover it. 00:38:01.144 [2024-05-15 20:29:53.380248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.380684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.380714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.144 qpair failed and we were unable to recover it. 00:38:01.144 [2024-05-15 20:29:53.381019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.381461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.381491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.144 qpair failed and we were unable to recover it. 00:38:01.144 [2024-05-15 20:29:53.381913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.144 [2024-05-15 20:29:53.382333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.382363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.145 qpair failed and we were unable to recover it. 00:38:01.145 [2024-05-15 20:29:53.382839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.383264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.383292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.145 qpair failed and we were unable to recover it. 00:38:01.145 [2024-05-15 20:29:53.383671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.384121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.384150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.145 qpair failed and we were unable to recover it. 
00:38:01.145 [2024-05-15 20:29:53.384682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.385215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.385256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.145 qpair failed and we were unable to recover it. 00:38:01.145 [2024-05-15 20:29:53.385723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.386175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.386205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.145 qpair failed and we were unable to recover it. 00:38:01.145 [2024-05-15 20:29:53.386618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.387037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.387066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.145 qpair failed and we were unable to recover it. 00:38:01.145 [2024-05-15 20:29:53.387493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.387918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.387947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.145 qpair failed and we were unable to recover it. 00:38:01.145 [2024-05-15 20:29:53.388385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.388818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.388848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.145 qpair failed and we were unable to recover it. 00:38:01.145 [2024-05-15 20:29:53.389265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.389684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.389716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.145 qpair failed and we were unable to recover it. 00:38:01.145 [2024-05-15 20:29:53.390072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.390528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.390557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.145 qpair failed and we were unable to recover it. 
00:38:01.145 [2024-05-15 20:29:53.390874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.391340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.391371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.145 qpair failed and we were unable to recover it. 00:38:01.145 [2024-05-15 20:29:53.391815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.392238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.392266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.145 qpair failed and we were unable to recover it. 00:38:01.145 [2024-05-15 20:29:53.392777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.393200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.393230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.145 qpair failed and we were unable to recover it. 00:38:01.145 [2024-05-15 20:29:53.393701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.395256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.395326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.145 qpair failed and we were unable to recover it. 00:38:01.145 [2024-05-15 20:29:53.395790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.396129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.396173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.145 qpair failed and we were unable to recover it. 00:38:01.145 [2024-05-15 20:29:53.396605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.397030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.397059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.145 qpair failed and we were unable to recover it. 00:38:01.145 [2024-05-15 20:29:53.397460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.397907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.397936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.145 qpair failed and we were unable to recover it. 
00:38:01.145 [2024-05-15 20:29:53.398371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.398670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.398703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.145 qpair failed and we were unable to recover it. 00:38:01.145 [2024-05-15 20:29:53.399122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.399549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.399579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.145 qpair failed and we were unable to recover it. 00:38:01.145 [2024-05-15 20:29:53.400011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.400434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.400463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.145 qpair failed and we were unable to recover it. 00:38:01.145 [2024-05-15 20:29:53.400886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.401309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.401352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.145 qpair failed and we were unable to recover it. 00:38:01.145 [2024-05-15 20:29:53.402989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.403474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.403510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.145 qpair failed and we were unable to recover it. 00:38:01.145 [2024-05-15 20:29:53.403952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.404353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.404386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.145 qpair failed and we were unable to recover it. 00:38:01.145 [2024-05-15 20:29:53.404835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.405143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.405170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.145 qpair failed and we were unable to recover it. 
00:38:01.145 [2024-05-15 20:29:53.405502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.405938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.405966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.145 qpair failed and we were unable to recover it. 00:38:01.145 [2024-05-15 20:29:53.406382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.406635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.406662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.145 qpair failed and we were unable to recover it. 00:38:01.145 [2024-05-15 20:29:53.407117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.407521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.407550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.145 qpair failed and we were unable to recover it. 00:38:01.145 [2024-05-15 20:29:53.407876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.408217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.408244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.145 qpair failed and we were unable to recover it. 00:38:01.145 [2024-05-15 20:29:53.408647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.409003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.409028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.145 qpair failed and we were unable to recover it. 00:38:01.145 [2024-05-15 20:29:53.409320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.409737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.145 [2024-05-15 20:29:53.409763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.145 qpair failed and we were unable to recover it. 00:38:01.146 [2024-05-15 20:29:53.410226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.410679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.410711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.146 qpair failed and we were unable to recover it. 
00:38:01.146 [2024-05-15 20:29:53.411146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.411480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.411509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.146 qpair failed and we were unable to recover it. 00:38:01.146 [2024-05-15 20:29:53.411909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.412307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.412343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.146 qpair failed and we were unable to recover it. 00:38:01.146 [2024-05-15 20:29:53.412791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.413269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.413295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.146 qpair failed and we were unable to recover it. 00:38:01.146 [2024-05-15 20:29:53.413674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.414078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.414112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.146 qpair failed and we were unable to recover it. 00:38:01.146 [2024-05-15 20:29:53.414648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.415093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.415132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.146 qpair failed and we were unable to recover it. 00:38:01.146 [2024-05-15 20:29:53.415574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.415998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.416028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.146 qpair failed and we were unable to recover it. 00:38:01.146 [2024-05-15 20:29:53.416465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.416915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.416943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.146 qpair failed and we were unable to recover it. 
00:38:01.146 [2024-05-15 20:29:53.417360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.417812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.417840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.146 qpair failed and we were unable to recover it. 00:38:01.146 [2024-05-15 20:29:53.418257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.418642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.418670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.146 qpair failed and we were unable to recover it. 00:38:01.146 [2024-05-15 20:29:53.419097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.419446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.419475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.146 qpair failed and we were unable to recover it. 00:38:01.146 [2024-05-15 20:29:53.419901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.420332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.420359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.146 qpair failed and we were unable to recover it. 00:38:01.146 [2024-05-15 20:29:53.420823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.421223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.421250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.146 qpair failed and we were unable to recover it. 00:38:01.146 [2024-05-15 20:29:53.421727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.422134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.422160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.146 qpair failed and we were unable to recover it. 00:38:01.146 [2024-05-15 20:29:53.422616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.422984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.423010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.146 qpair failed and we were unable to recover it. 
00:38:01.146 [2024-05-15 20:29:53.423355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.423799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.423827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.146 qpair failed and we were unable to recover it. 00:38:01.146 [2024-05-15 20:29:53.424289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.424767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.424807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.146 qpair failed and we were unable to recover it. 00:38:01.146 [2024-05-15 20:29:53.425188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.425580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.425609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.146 qpair failed and we were unable to recover it. 00:38:01.146 [2024-05-15 20:29:53.426043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.426417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.426446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.146 qpair failed and we were unable to recover it. 00:38:01.146 [2024-05-15 20:29:53.426923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.427335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.427363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.146 qpair failed and we were unable to recover it. 00:38:01.146 [2024-05-15 20:29:53.427776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.428178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.428204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.146 qpair failed and we were unable to recover it. 00:38:01.146 [2024-05-15 20:29:53.428663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.429061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.429087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.146 qpair failed and we were unable to recover it. 
00:38:01.146 [2024-05-15 20:29:53.429504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.429906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.429933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.146 qpair failed and we were unable to recover it. 00:38:01.146 [2024-05-15 20:29:53.430357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.430680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.430706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.146 qpair failed and we were unable to recover it. 00:38:01.146 [2024-05-15 20:29:53.431116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.431523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.431551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.146 qpair failed and we were unable to recover it. 00:38:01.146 [2024-05-15 20:29:53.431969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.432371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.432398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.146 qpair failed and we were unable to recover it. 00:38:01.146 [2024-05-15 20:29:53.432826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.146 [2024-05-15 20:29:53.433134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.433160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.147 qpair failed and we were unable to recover it. 00:38:01.147 [2024-05-15 20:29:53.433564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.433973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.434000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.147 qpair failed and we were unable to recover it. 00:38:01.147 [2024-05-15 20:29:53.434295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.434732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.434759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.147 qpair failed and we were unable to recover it. 
00:38:01.147 [2024-05-15 20:29:53.435178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.435657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.435684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.147 qpair failed and we were unable to recover it. 00:38:01.147 [2024-05-15 20:29:53.435946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.436385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.436413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.147 qpair failed and we were unable to recover it. 00:38:01.147 [2024-05-15 20:29:53.436837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.437235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.437261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.147 qpair failed and we were unable to recover it. 00:38:01.147 [2024-05-15 20:29:53.437617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.438017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.438043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.147 qpair failed and we were unable to recover it. 00:38:01.147 [2024-05-15 20:29:53.438465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.438875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.438901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.147 qpair failed and we were unable to recover it. 00:38:01.147 [2024-05-15 20:29:53.439328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.439750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.439776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.147 qpair failed and we were unable to recover it. 00:38:01.147 [2024-05-15 20:29:53.440199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.440617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.440645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.147 qpair failed and we were unable to recover it. 
00:38:01.147 [2024-05-15 20:29:53.441078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.441555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.441582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.147 qpair failed and we were unable to recover it. 00:38:01.147 [2024-05-15 20:29:53.442020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.442419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.442446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.147 qpair failed and we were unable to recover it. 00:38:01.147 [2024-05-15 20:29:53.442874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.443236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.443262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.147 qpair failed and we were unable to recover it. 00:38:01.147 [2024-05-15 20:29:53.443743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.444142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.444168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.147 qpair failed and we were unable to recover it. 00:38:01.147 [2024-05-15 20:29:53.444614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.445090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.445117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.147 qpair failed and we were unable to recover it. 00:38:01.147 [2024-05-15 20:29:53.445549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.445977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.446003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.147 qpair failed and we were unable to recover it. 00:38:01.147 [2024-05-15 20:29:53.446441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.446854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.446880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.147 qpair failed and we were unable to recover it. 
00:38:01.147 [2024-05-15 20:29:53.447296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.447700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.447726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.147 qpair failed and we were unable to recover it. 00:38:01.147 [2024-05-15 20:29:53.448155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.448566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.448593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.147 qpair failed and we were unable to recover it. 00:38:01.147 [2024-05-15 20:29:53.449008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.449580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.449684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.147 qpair failed and we were unable to recover it. 00:38:01.147 [2024-05-15 20:29:53.450197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.450620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.450651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.147 qpair failed and we were unable to recover it. 00:38:01.147 [2024-05-15 20:29:53.451077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.451502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.451529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.147 qpair failed and we were unable to recover it. 00:38:01.147 [2024-05-15 20:29:53.451902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.452300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.452346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.147 qpair failed and we were unable to recover it. 00:38:01.147 [2024-05-15 20:29:53.452800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.453203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.453229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.147 qpair failed and we were unable to recover it. 
00:38:01.147 [2024-05-15 20:29:53.453638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.454054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.454081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.147 qpair failed and we were unable to recover it. 00:38:01.147 [2024-05-15 20:29:53.454527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.454951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.454977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.147 qpair failed and we were unable to recover it. 00:38:01.147 [2024-05-15 20:29:53.455408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.455804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.455830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.147 qpair failed and we were unable to recover it. 00:38:01.147 [2024-05-15 20:29:53.456263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.456574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.456611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.147 qpair failed and we were unable to recover it. 00:38:01.147 [2024-05-15 20:29:53.457059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.457461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.457488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.147 qpair failed and we were unable to recover it. 00:38:01.147 [2024-05-15 20:29:53.457833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.458241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.147 [2024-05-15 20:29:53.458281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.147 qpair failed and we were unable to recover it. 00:38:01.147 [2024-05-15 20:29:53.458727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.459137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.459164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.148 qpair failed and we were unable to recover it. 
00:38:01.148 [2024-05-15 20:29:53.459607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.460029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.460056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.148 qpair failed and we were unable to recover it. 00:38:01.148 [2024-05-15 20:29:53.460477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.460889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.460916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.148 qpair failed and we were unable to recover it. 00:38:01.148 [2024-05-15 20:29:53.461334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.461744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.461770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.148 qpair failed and we were unable to recover it. 00:38:01.148 [2024-05-15 20:29:53.462207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.462628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.462656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.148 qpair failed and we were unable to recover it. 00:38:01.148 [2024-05-15 20:29:53.463096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.463495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.463522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.148 qpair failed and we were unable to recover it. 00:38:01.148 [2024-05-15 20:29:53.463943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.464330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.464358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.148 qpair failed and we were unable to recover it. 00:38:01.148 [2024-05-15 20:29:53.464823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.465226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.465252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.148 qpair failed and we were unable to recover it. 
00:38:01.148 [2024-05-15 20:29:53.465668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.466065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.466092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.148 qpair failed and we were unable to recover it. 00:38:01.148 [2024-05-15 20:29:53.466635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.467163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.467201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.148 qpair failed and we were unable to recover it. 00:38:01.148 [2024-05-15 20:29:53.467651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.468147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.468175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.148 qpair failed and we were unable to recover it. 00:38:01.148 [2024-05-15 20:29:53.468643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.469048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.469075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.148 qpair failed and we were unable to recover it. 00:38:01.148 [2024-05-15 20:29:53.469498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.469899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.469925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.148 qpair failed and we were unable to recover it. 00:38:01.148 [2024-05-15 20:29:53.470327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.470785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.470813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.148 qpair failed and we were unable to recover it. 00:38:01.148 [2024-05-15 20:29:53.471234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.471677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.471705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.148 qpair failed and we were unable to recover it. 
00:38:01.148 [2024-05-15 20:29:53.472148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.472678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.472780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.148 qpair failed and we were unable to recover it. 00:38:01.148 [2024-05-15 20:29:53.473281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.473792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.473822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.148 qpair failed and we were unable to recover it. 00:38:01.148 [2024-05-15 20:29:53.474237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.474710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.474739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.148 qpair failed and we were unable to recover it. 00:38:01.148 [2024-05-15 20:29:53.475159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.475599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.475703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.148 qpair failed and we were unable to recover it. 00:38:01.148 [2024-05-15 20:29:53.476166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.476605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.476635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.148 qpair failed and we were unable to recover it. 00:38:01.148 [2024-05-15 20:29:53.476999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.477417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.477446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.148 qpair failed and we were unable to recover it. 00:38:01.148 [2024-05-15 20:29:53.477844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.478270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.478297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.148 qpair failed and we were unable to recover it. 
00:38:01.148 [2024-05-15 20:29:53.478665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.479088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.148 [2024-05-15 20:29:53.479116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.148 qpair failed and we were unable to recover it. 00:38:01.148 [2024-05-15 20:29:53.479550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.149 [2024-05-15 20:29:53.479899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.149 [2024-05-15 20:29:53.479926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.149 qpair failed and we were unable to recover it. 00:38:01.149 [2024-05-15 20:29:53.480344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.149 [2024-05-15 20:29:53.480779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.149 [2024-05-15 20:29:53.480806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.149 qpair failed and we were unable to recover it. 00:38:01.149 [2024-05-15 20:29:53.481115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.149 [2024-05-15 20:29:53.481536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.149 [2024-05-15 20:29:53.481566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.149 qpair failed and we were unable to recover it. 00:38:01.149 [2024-05-15 20:29:53.481979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.149 [2024-05-15 20:29:53.482397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.149 [2024-05-15 20:29:53.482426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.149 qpair failed and we were unable to recover it. 00:38:01.149 [2024-05-15 20:29:53.482882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.149 [2024-05-15 20:29:53.483324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.149 [2024-05-15 20:29:53.483353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.149 qpair failed and we were unable to recover it. 00:38:01.149 [2024-05-15 20:29:53.483801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.149 [2024-05-15 20:29:53.484185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.149 [2024-05-15 20:29:53.484212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.149 qpair failed and we were unable to recover it. 
00:38:01.154 [2024-05-15 20:29:53.602596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.154 [2024-05-15 20:29:53.603177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.154 [2024-05-15 20:29:53.603215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.154 qpair failed and we were unable to recover it. 00:38:01.154 [2024-05-15 20:29:53.603661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.154 [2024-05-15 20:29:53.604083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.154 [2024-05-15 20:29:53.604110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.155 qpair failed and we were unable to recover it. 00:38:01.155 [2024-05-15 20:29:53.604560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.604961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.604989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.155 qpair failed and we were unable to recover it. 00:38:01.155 [2024-05-15 20:29:53.605760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.606189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.606225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.155 qpair failed and we were unable to recover it. 00:38:01.155 [2024-05-15 20:29:53.606695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.607136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.607164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.155 qpair failed and we were unable to recover it. 00:38:01.155 [2024-05-15 20:29:53.607604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.608004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.608030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.155 qpair failed and we were unable to recover it. 00:38:01.155 [2024-05-15 20:29:53.608443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.608871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.608897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.155 qpair failed and we were unable to recover it. 
00:38:01.155 [2024-05-15 20:29:53.609338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.609740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.609766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.155 qpair failed and we were unable to recover it. 00:38:01.155 [2024-05-15 20:29:53.610203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.610618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.610646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.155 qpair failed and we were unable to recover it. 00:38:01.155 [2024-05-15 20:29:53.611070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.611467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.611494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.155 qpair failed and we were unable to recover it. 00:38:01.155 [2024-05-15 20:29:53.611906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.612277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.612303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.155 qpair failed and we were unable to recover it. 00:38:01.155 [2024-05-15 20:29:53.612717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.613160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.613187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.155 qpair failed and we were unable to recover it. 00:38:01.155 [2024-05-15 20:29:53.613615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.613946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.613973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.155 qpair failed and we were unable to recover it. 00:38:01.155 [2024-05-15 20:29:53.614406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.614818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.614844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.155 qpair failed and we were unable to recover it. 
00:38:01.155 [2024-05-15 20:29:53.615260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.615680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.615716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.155 qpair failed and we were unable to recover it. 00:38:01.155 [2024-05-15 20:29:53.616123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.616574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.616602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.155 qpair failed and we were unable to recover it. 00:38:01.155 [2024-05-15 20:29:53.616976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.617403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.617432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.155 qpair failed and we were unable to recover it. 00:38:01.155 [2024-05-15 20:29:53.617745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.618169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.618196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.155 qpair failed and we were unable to recover it. 00:38:01.155 [2024-05-15 20:29:53.618624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.619022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.619048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.155 qpair failed and we were unable to recover it. 00:38:01.155 [2024-05-15 20:29:53.619300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.619738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.619766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.155 qpair failed and we were unable to recover it. 00:38:01.155 [2024-05-15 20:29:53.620167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.620537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.620564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.155 qpair failed and we were unable to recover it. 
00:38:01.155 [2024-05-15 20:29:53.620823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.621204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.621229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.155 qpair failed and we were unable to recover it. 00:38:01.155 [2024-05-15 20:29:53.621676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.621980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.622006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.155 qpair failed and we were unable to recover it. 00:38:01.155 [2024-05-15 20:29:53.622430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.622902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.622929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.155 qpair failed and we were unable to recover it. 00:38:01.155 [2024-05-15 20:29:53.623403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.623831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.623857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.155 qpair failed and we were unable to recover it. 00:38:01.155 [2024-05-15 20:29:53.624273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.624638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.624666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.155 qpair failed and we were unable to recover it. 00:38:01.155 [2024-05-15 20:29:53.625096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.625483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.625511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.155 qpair failed and we were unable to recover it. 00:38:01.155 [2024-05-15 20:29:53.625991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.626386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.626414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.155 qpair failed and we were unable to recover it. 
00:38:01.155 [2024-05-15 20:29:53.626712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.627126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.627152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.155 qpair failed and we were unable to recover it. 00:38:01.155 [2024-05-15 20:29:53.627571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.628008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.628034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.155 qpair failed and we were unable to recover it. 00:38:01.155 [2024-05-15 20:29:53.628446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.628877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.628904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.155 qpair failed and we were unable to recover it. 00:38:01.155 [2024-05-15 20:29:53.629344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.155 [2024-05-15 20:29:53.629741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.156 [2024-05-15 20:29:53.629767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.156 qpair failed and we were unable to recover it. 00:38:01.156 [2024-05-15 20:29:53.630138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.156 [2024-05-15 20:29:53.630649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.156 [2024-05-15 20:29:53.630677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.156 qpair failed and we were unable to recover it. 00:38:01.156 [2024-05-15 20:29:53.631110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.156 [2024-05-15 20:29:53.631529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.156 [2024-05-15 20:29:53.631556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.156 qpair failed and we were unable to recover it. 00:38:01.156 [2024-05-15 20:29:53.631992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.156 [2024-05-15 20:29:53.632390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.156 [2024-05-15 20:29:53.632418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.156 qpair failed and we were unable to recover it. 
00:38:01.156 [2024-05-15 20:29:53.632862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.156 [2024-05-15 20:29:53.633257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.156 [2024-05-15 20:29:53.633283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.156 qpair failed and we were unable to recover it. 00:38:01.156 [2024-05-15 20:29:53.633729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.156 [2024-05-15 20:29:53.634174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.156 [2024-05-15 20:29:53.634201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.156 qpair failed and we were unable to recover it. 00:38:01.156 [2024-05-15 20:29:53.634641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.156 [2024-05-15 20:29:53.635059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.156 [2024-05-15 20:29:53.635087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.156 qpair failed and we were unable to recover it. 00:38:01.156 [2024-05-15 20:29:53.635505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.156 [2024-05-15 20:29:53.635910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.156 [2024-05-15 20:29:53.635937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.156 qpair failed and we were unable to recover it. 00:38:01.156 [2024-05-15 20:29:53.636383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.156 [2024-05-15 20:29:53.636783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.156 [2024-05-15 20:29:53.636810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.156 qpair failed and we were unable to recover it. 00:38:01.156 [2024-05-15 20:29:53.637235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.156 [2024-05-15 20:29:53.637633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.156 [2024-05-15 20:29:53.637660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.156 qpair failed and we were unable to recover it. 00:38:01.156 [2024-05-15 20:29:53.638088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.156 [2024-05-15 20:29:53.638489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.156 [2024-05-15 20:29:53.638518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.156 qpair failed and we were unable to recover it. 
00:38:01.156 [2024-05-15 20:29:53.638960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.423 [2024-05-15 20:29:53.639379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.423 [2024-05-15 20:29:53.639410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.423 qpair failed and we were unable to recover it. 00:38:01.423 [2024-05-15 20:29:53.639839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.423 [2024-05-15 20:29:53.640267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.423 [2024-05-15 20:29:53.640293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.423 qpair failed and we were unable to recover it. 00:38:01.423 [2024-05-15 20:29:53.640742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.423 [2024-05-15 20:29:53.641152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.641178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.424 qpair failed and we were unable to recover it. 00:38:01.424 [2024-05-15 20:29:53.641594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.642001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.642028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.424 qpair failed and we were unable to recover it. 00:38:01.424 [2024-05-15 20:29:53.642447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.642818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.642845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.424 qpair failed and we were unable to recover it. 00:38:01.424 [2024-05-15 20:29:53.643260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.643658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.643688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.424 qpair failed and we were unable to recover it. 00:38:01.424 [2024-05-15 20:29:53.644113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.644466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.644494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.424 qpair failed and we were unable to recover it. 
00:38:01.424 [2024-05-15 20:29:53.644880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.645351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.645379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.424 qpair failed and we were unable to recover it. 00:38:01.424 [2024-05-15 20:29:53.645844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.646290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.646328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.424 qpair failed and we were unable to recover it. 00:38:01.424 [2024-05-15 20:29:53.646681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.647130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.647157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.424 qpair failed and we were unable to recover it. 00:38:01.424 [2024-05-15 20:29:53.647574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.647968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.647995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.424 qpair failed and we were unable to recover it. 00:38:01.424 [2024-05-15 20:29:53.648427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.648872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.648899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.424 qpair failed and we were unable to recover it. 00:38:01.424 [2024-05-15 20:29:53.649224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.649646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.649673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.424 qpair failed and we were unable to recover it. 00:38:01.424 [2024-05-15 20:29:53.650101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.650602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.650630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.424 qpair failed and we were unable to recover it. 
00:38:01.424 [2024-05-15 20:29:53.651073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.651468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.651497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.424 qpair failed and we were unable to recover it. 00:38:01.424 [2024-05-15 20:29:53.651901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.652336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.652364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.424 qpair failed and we were unable to recover it. 00:38:01.424 [2024-05-15 20:29:53.652787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.653225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.653252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.424 qpair failed and we were unable to recover it. 00:38:01.424 [2024-05-15 20:29:53.653677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.654109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.654137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.424 qpair failed and we were unable to recover it. 00:38:01.424 [2024-05-15 20:29:53.654446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.654884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.654911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.424 qpair failed and we were unable to recover it. 00:38:01.424 [2024-05-15 20:29:53.655290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.655701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.655729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.424 qpair failed and we were unable to recover it. 00:38:01.424 [2024-05-15 20:29:53.656141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.656541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.656569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.424 qpair failed and we were unable to recover it. 
00:38:01.424 [2024-05-15 20:29:53.656934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.657359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.657387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.424 qpair failed and we were unable to recover it. 00:38:01.424 [2024-05-15 20:29:53.657801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.658228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.658255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.424 qpair failed and we were unable to recover it. 00:38:01.424 [2024-05-15 20:29:53.658659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.659084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.659117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.424 qpair failed and we were unable to recover it. 00:38:01.424 [2024-05-15 20:29:53.659531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.659971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.659997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.424 qpair failed and we were unable to recover it. 00:38:01.424 [2024-05-15 20:29:53.660414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.660818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.660845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.424 qpair failed and we were unable to recover it. 00:38:01.424 [2024-05-15 20:29:53.661259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.661705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.661733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.424 qpair failed and we were unable to recover it. 00:38:01.424 [2024-05-15 20:29:53.662161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.662585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.662613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.424 qpair failed and we were unable to recover it. 
00:38:01.424 [2024-05-15 20:29:53.663053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.663574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.663678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.424 qpair failed and we were unable to recover it. 00:38:01.424 [2024-05-15 20:29:53.664203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.664596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.664627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.424 qpair failed and we were unable to recover it. 00:38:01.424 [2024-05-15 20:29:53.665070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.665483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-05-15 20:29:53.665512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.424 qpair failed and we were unable to recover it. 00:38:01.425 [2024-05-15 20:29:53.665959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.666361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.666390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.425 qpair failed and we were unable to recover it. 00:38:01.425 [2024-05-15 20:29:53.666714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.667173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.667202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.425 qpair failed and we were unable to recover it. 00:38:01.425 [2024-05-15 20:29:53.667634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.668057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.668084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.425 qpair failed and we were unable to recover it. 00:38:01.425 [2024-05-15 20:29:53.668509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.668859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.668885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.425 qpair failed and we were unable to recover it. 
00:38:01.425 [2024-05-15 20:29:53.669311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.669652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.669681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.425 qpair failed and we were unable to recover it. 00:38:01.425 [2024-05-15 20:29:53.670149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.670509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.670537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.425 qpair failed and we were unable to recover it. 00:38:01.425 [2024-05-15 20:29:53.670940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.671368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.671396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.425 qpair failed and we were unable to recover it. 00:38:01.425 [2024-05-15 20:29:53.671801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.672228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.672254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.425 qpair failed and we were unable to recover it. 00:38:01.425 [2024-05-15 20:29:53.672672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.673113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.673139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.425 qpair failed and we were unable to recover it. 00:38:01.425 [2024-05-15 20:29:53.673503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.673946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.673972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.425 qpair failed and we were unable to recover it. 00:38:01.425 [2024-05-15 20:29:53.674400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.674801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.674827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.425 qpair failed and we were unable to recover it. 
00:38:01.425 [2024-05-15 20:29:53.675246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.675647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.675675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.425 qpair failed and we were unable to recover it. 00:38:01.425 [2024-05-15 20:29:53.676101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.676532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.676560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.425 qpair failed and we were unable to recover it. 00:38:01.425 [2024-05-15 20:29:53.676988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.677384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.677412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.425 qpair failed and we were unable to recover it. 00:38:01.425 [2024-05-15 20:29:53.677848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.678247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.678273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.425 qpair failed and we were unable to recover it. 00:38:01.425 [2024-05-15 20:29:53.678686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.679089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.679114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.425 qpair failed and we were unable to recover it. 00:38:01.425 [2024-05-15 20:29:53.679590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.679977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.680004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.425 qpair failed and we were unable to recover it. 00:38:01.425 [2024-05-15 20:29:53.680401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.680829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.680856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.425 qpair failed and we were unable to recover it. 
00:38:01.425 [2024-05-15 20:29:53.681283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.681661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.681689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.425 qpair failed and we were unable to recover it. 00:38:01.425 [2024-05-15 20:29:53.682096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.682493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.682521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.425 qpair failed and we were unable to recover it. 00:38:01.425 [2024-05-15 20:29:53.682949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.683392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.683419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.425 qpair failed and we were unable to recover it. 00:38:01.425 [2024-05-15 20:29:53.683839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.684251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.684276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.425 qpair failed and we were unable to recover it. 00:38:01.425 [2024-05-15 20:29:53.684781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.685209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.685236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.425 qpair failed and we were unable to recover it. 00:38:01.425 [2024-05-15 20:29:53.685683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.686105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.686132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.425 qpair failed and we were unable to recover it. 00:38:01.425 [2024-05-15 20:29:53.686571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.686922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.686949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.425 qpair failed and we were unable to recover it. 
00:38:01.425 [2024-05-15 20:29:53.687421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.687820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.687846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.425 qpair failed and we were unable to recover it. 00:38:01.425 [2024-05-15 20:29:53.688283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.688755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.688782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.425 qpair failed and we were unable to recover it. 00:38:01.425 [2024-05-15 20:29:53.689270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.689678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.689707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.425 qpair failed and we were unable to recover it. 00:38:01.425 [2024-05-15 20:29:53.690119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.690631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.425 [2024-05-15 20:29:53.690735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.425 qpair failed and we were unable to recover it. 00:38:01.425 [2024-05-15 20:29:53.691251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.426 [2024-05-15 20:29:53.691672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.426 [2024-05-15 20:29:53.691703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.426 qpair failed and we were unable to recover it. 00:38:01.426 [2024-05-15 20:29:53.692138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.426 [2024-05-15 20:29:53.692655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.426 [2024-05-15 20:29:53.692758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.426 qpair failed and we were unable to recover it. 00:38:01.426 [2024-05-15 20:29:53.693278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.426 [2024-05-15 20:29:53.693720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.426 [2024-05-15 20:29:53.693751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.426 qpair failed and we were unable to recover it. 
00:38:01.426 [2024-05-15 20:29:53.694123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.426 [2024-05-15 20:29:53.694633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.426 [2024-05-15 20:29:53.694736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420
00:38:01.426 qpair failed and we were unable to recover it.
[... the same sequence (two or three posix_sock_create "connect() failed, errno = 111" errors, then nvme_tcp_qpair_connect_sock reporting a sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 20:29:53.694 through 20:29:53.824 ...]
00:38:01.431 [2024-05-15 20:29:53.824232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.431 [2024-05-15 20:29:53.824652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.431 [2024-05-15 20:29:53.824679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420
00:38:01.431 qpair failed and we were unable to recover it.
00:38:01.431 [2024-05-15 20:29:53.825147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.431 [2024-05-15 20:29:53.825507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.431 [2024-05-15 20:29:53.825543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.431 qpair failed and we were unable to recover it. 00:38:01.431 [2024-05-15 20:29:53.825966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.431 [2024-05-15 20:29:53.826389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.431 [2024-05-15 20:29:53.826418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.431 qpair failed and we were unable to recover it. 00:38:01.431 [2024-05-15 20:29:53.826846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.431 [2024-05-15 20:29:53.827277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.431 [2024-05-15 20:29:53.827303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.431 qpair failed and we were unable to recover it. 00:38:01.431 [2024-05-15 20:29:53.827721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.431 [2024-05-15 20:29:53.828121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.431 [2024-05-15 20:29:53.828147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.431 qpair failed and we were unable to recover it. 00:38:01.431 [2024-05-15 20:29:53.828564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.431 [2024-05-15 20:29:53.828930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.431 [2024-05-15 20:29:53.828957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.431 qpair failed and we were unable to recover it. 00:38:01.431 [2024-05-15 20:29:53.829254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.431 [2024-05-15 20:29:53.829670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.431 [2024-05-15 20:29:53.829698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.431 qpair failed and we were unable to recover it. 00:38:01.431 [2024-05-15 20:29:53.830167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.431 [2024-05-15 20:29:53.830589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.431 [2024-05-15 20:29:53.830618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.431 qpair failed and we were unable to recover it. 
00:38:01.431 [2024-05-15 20:29:53.830984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.431 [2024-05-15 20:29:53.831397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.431 [2024-05-15 20:29:53.831425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.431 qpair failed and we were unable to recover it. 00:38:01.431 [2024-05-15 20:29:53.831869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.431 [2024-05-15 20:29:53.832270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.431 [2024-05-15 20:29:53.832296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.431 qpair failed and we were unable to recover it. 00:38:01.431 [2024-05-15 20:29:53.832731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.431 [2024-05-15 20:29:53.833020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.431 [2024-05-15 20:29:53.833047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.431 qpair failed and we were unable to recover it. 00:38:01.431 [2024-05-15 20:29:53.833486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.431 [2024-05-15 20:29:53.833905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.431 [2024-05-15 20:29:53.833932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.431 qpair failed and we were unable to recover it. 00:38:01.431 [2024-05-15 20:29:53.834429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.431 [2024-05-15 20:29:53.834823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.431 [2024-05-15 20:29:53.834850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.431 qpair failed and we were unable to recover it. 00:38:01.431 [2024-05-15 20:29:53.835169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.431 [2024-05-15 20:29:53.835591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.431 [2024-05-15 20:29:53.835618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.431 qpair failed and we were unable to recover it. 00:38:01.431 [2024-05-15 20:29:53.836036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.431 [2024-05-15 20:29:53.836337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.431 [2024-05-15 20:29:53.836367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.432 qpair failed and we were unable to recover it. 
00:38:01.432 [2024-05-15 20:29:53.836824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.837250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.837276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.432 qpair failed and we were unable to recover it. 00:38:01.432 [2024-05-15 20:29:53.837724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.838124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.838151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.432 qpair failed and we were unable to recover it. 00:38:01.432 [2024-05-15 20:29:53.838555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.838984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.839010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.432 qpair failed and we were unable to recover it. 00:38:01.432 [2024-05-15 20:29:53.839389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.839811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.839838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.432 qpair failed and we were unable to recover it. 00:38:01.432 [2024-05-15 20:29:53.840271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.840673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.840700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.432 qpair failed and we were unable to recover it. 00:38:01.432 [2024-05-15 20:29:53.841125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.841548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.841575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.432 qpair failed and we were unable to recover it. 00:38:01.432 [2024-05-15 20:29:53.841977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.842363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.842391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.432 qpair failed and we were unable to recover it. 
00:38:01.432 [2024-05-15 20:29:53.842806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.843193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.843219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.432 qpair failed and we were unable to recover it. 00:38:01.432 [2024-05-15 20:29:53.843622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.843929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.843956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.432 qpair failed and we were unable to recover it. 00:38:01.432 [2024-05-15 20:29:53.844370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.844784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.844811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.432 qpair failed and we were unable to recover it. 00:38:01.432 [2024-05-15 20:29:53.845248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.845651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.845679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.432 qpair failed and we were unable to recover it. 00:38:01.432 [2024-05-15 20:29:53.846096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.846499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.846527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.432 qpair failed and we were unable to recover it. 00:38:01.432 [2024-05-15 20:29:53.846964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.847364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.847391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.432 qpair failed and we were unable to recover it. 00:38:01.432 [2024-05-15 20:29:53.847804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.848236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.848264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.432 qpair failed and we were unable to recover it. 
00:38:01.432 [2024-05-15 20:29:53.848690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.849111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.849138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.432 qpair failed and we were unable to recover it. 00:38:01.432 [2024-05-15 20:29:53.849462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.849875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.849901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.432 qpair failed and we were unable to recover it. 00:38:01.432 [2024-05-15 20:29:53.850345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.850778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.850804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.432 qpair failed and we were unable to recover it. 00:38:01.432 [2024-05-15 20:29:53.851134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.851568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.851596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.432 qpair failed and we were unable to recover it. 00:38:01.432 [2024-05-15 20:29:53.852030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.852332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.852360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.432 qpair failed and we were unable to recover it. 00:38:01.432 [2024-05-15 20:29:53.852818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.853175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.853201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.432 qpair failed and we were unable to recover it. 00:38:01.432 [2024-05-15 20:29:53.853617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.853899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.853926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.432 qpair failed and we were unable to recover it. 
00:38:01.432 [2024-05-15 20:29:53.854362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.854780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.854806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.432 qpair failed and we were unable to recover it. 00:38:01.432 [2024-05-15 20:29:53.855159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.855458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.855485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.432 qpair failed and we were unable to recover it. 00:38:01.432 [2024-05-15 20:29:53.855954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.856351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.856379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.432 qpair failed and we were unable to recover it. 00:38:01.432 [2024-05-15 20:29:53.856819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.857235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.857262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.432 qpair failed and we were unable to recover it. 00:38:01.432 [2024-05-15 20:29:53.857713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.858118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.858145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.432 qpair failed and we were unable to recover it. 00:38:01.432 [2024-05-15 20:29:53.858577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.859004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.432 [2024-05-15 20:29:53.859031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.433 qpair failed and we were unable to recover it. 00:38:01.433 [2024-05-15 20:29:53.859448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.859858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.859891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.433 qpair failed and we were unable to recover it. 
00:38:01.433 [2024-05-15 20:29:53.860295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.860614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.860646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.433 qpair failed and we were unable to recover it. 00:38:01.433 [2024-05-15 20:29:53.861083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.861509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.861536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.433 qpair failed and we were unable to recover it. 00:38:01.433 [2024-05-15 20:29:53.862008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.862396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.862424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.433 qpair failed and we were unable to recover it. 00:38:01.433 [2024-05-15 20:29:53.862853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.863252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.863278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.433 qpair failed and we were unable to recover it. 00:38:01.433 [2024-05-15 20:29:53.863606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.864033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.864060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.433 qpair failed and we were unable to recover it. 00:38:01.433 [2024-05-15 20:29:53.864384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.864808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.864835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.433 qpair failed and we were unable to recover it. 00:38:01.433 [2024-05-15 20:29:53.865271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.865670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.865697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.433 qpair failed and we were unable to recover it. 
00:38:01.433 [2024-05-15 20:29:53.866127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.866521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.866549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.433 qpair failed and we were unable to recover it. 00:38:01.433 [2024-05-15 20:29:53.866949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.867347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.867375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.433 qpair failed and we were unable to recover it. 00:38:01.433 [2024-05-15 20:29:53.867762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.868172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.868198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.433 qpair failed and we were unable to recover it. 00:38:01.433 [2024-05-15 20:29:53.868513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.868939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.868966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.433 qpair failed and we were unable to recover it. 00:38:01.433 [2024-05-15 20:29:53.869336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.869772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.869800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.433 qpair failed and we were unable to recover it. 00:38:01.433 [2024-05-15 20:29:53.870225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.870616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.870645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.433 qpair failed and we were unable to recover it. 00:38:01.433 [2024-05-15 20:29:53.871082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.871439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.871466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.433 qpair failed and we were unable to recover it. 
00:38:01.433 [2024-05-15 20:29:53.871872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.872271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.872298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.433 qpair failed and we were unable to recover it. 00:38:01.433 [2024-05-15 20:29:53.872746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.873145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.873172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.433 qpair failed and we were unable to recover it. 00:38:01.433 [2024-05-15 20:29:53.873654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.874061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.874087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.433 qpair failed and we were unable to recover it. 00:38:01.433 [2024-05-15 20:29:53.874528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.874938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.874965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.433 qpair failed and we were unable to recover it. 00:38:01.433 [2024-05-15 20:29:53.875392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.875844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.875871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.433 qpair failed and we were unable to recover it. 00:38:01.433 [2024-05-15 20:29:53.876195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.876586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.876614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.433 qpair failed and we were unable to recover it. 00:38:01.433 [2024-05-15 20:29:53.877047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.877454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.877482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.433 qpair failed and we were unable to recover it. 
00:38:01.433 [2024-05-15 20:29:53.877899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.878204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.878230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.433 qpair failed and we were unable to recover it. 00:38:01.433 [2024-05-15 20:29:53.878626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.879040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.879066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.433 qpair failed and we were unable to recover it. 00:38:01.433 [2024-05-15 20:29:53.879381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.879818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.879844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.433 qpair failed and we were unable to recover it. 00:38:01.433 [2024-05-15 20:29:53.880277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.880678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.880705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.433 qpair failed and we were unable to recover it. 00:38:01.433 [2024-05-15 20:29:53.881112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.881465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.433 [2024-05-15 20:29:53.881494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.433 qpair failed and we were unable to recover it. 00:38:01.433 [2024-05-15 20:29:53.881939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.882370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.882398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.434 qpair failed and we were unable to recover it. 00:38:01.434 [2024-05-15 20:29:53.882847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.883249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.883276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.434 qpair failed and we were unable to recover it. 
00:38:01.434 [2024-05-15 20:29:53.883687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.884085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.884113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.434 qpair failed and we were unable to recover it. 00:38:01.434 [2024-05-15 20:29:53.884428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.884835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.884861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.434 qpair failed and we were unable to recover it. 00:38:01.434 [2024-05-15 20:29:53.885295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.885762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.885789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.434 qpair failed and we were unable to recover it. 00:38:01.434 [2024-05-15 20:29:53.886203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.886628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.886658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.434 qpair failed and we were unable to recover it. 00:38:01.434 [2024-05-15 20:29:53.887085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.887481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.887508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.434 qpair failed and we were unable to recover it. 00:38:01.434 [2024-05-15 20:29:53.887953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.888265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.888291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.434 qpair failed and we were unable to recover it. 00:38:01.434 [2024-05-15 20:29:53.888739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.889088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.889114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.434 qpair failed and we were unable to recover it. 
00:38:01.434 [2024-05-15 20:29:53.889532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.889943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.889970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.434 qpair failed and we were unable to recover it. 00:38:01.434 [2024-05-15 20:29:53.890351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.890778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.890805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.434 qpair failed and we were unable to recover it. 00:38:01.434 [2024-05-15 20:29:53.891249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.891646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.891675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.434 qpair failed and we were unable to recover it. 00:38:01.434 [2024-05-15 20:29:53.892100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.892493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.892520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.434 qpair failed and we were unable to recover it. 00:38:01.434 [2024-05-15 20:29:53.892974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.893373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.893400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.434 qpair failed and we were unable to recover it. 00:38:01.434 [2024-05-15 20:29:53.893826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.894242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.894269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.434 qpair failed and we were unable to recover it. 00:38:01.434 [2024-05-15 20:29:53.894718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.895096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.895123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.434 qpair failed and we were unable to recover it. 
00:38:01.434 [2024-05-15 20:29:53.895542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.895798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.895826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.434 qpair failed and we were unable to recover it. 00:38:01.434 [2024-05-15 20:29:53.896256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.896651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.896679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.434 qpair failed and we were unable to recover it. 00:38:01.434 [2024-05-15 20:29:53.897084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.897512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.897539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.434 qpair failed and we were unable to recover it. 00:38:01.434 [2024-05-15 20:29:53.897997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.898419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.898447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.434 qpair failed and we were unable to recover it. 00:38:01.434 [2024-05-15 20:29:53.898876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.899273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.899300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.434 qpair failed and we were unable to recover it. 00:38:01.434 [2024-05-15 20:29:53.899725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.900143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.900169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.434 qpair failed and we were unable to recover it. 00:38:01.434 [2024-05-15 20:29:53.900593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.900892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.900922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.434 qpair failed and we were unable to recover it. 
00:38:01.434 [2024-05-15 20:29:53.901353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.901787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.901813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.434 qpair failed and we were unable to recover it. 00:38:01.434 [2024-05-15 20:29:53.902235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.902631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.902666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.434 qpair failed and we were unable to recover it. 00:38:01.434 [2024-05-15 20:29:53.903092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.903514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.903542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.434 qpair failed and we were unable to recover it. 00:38:01.434 [2024-05-15 20:29:53.903974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.904376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.904404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.434 qpair failed and we were unable to recover it. 00:38:01.434 [2024-05-15 20:29:53.904812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.905208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.905234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.434 qpair failed and we were unable to recover it. 00:38:01.434 [2024-05-15 20:29:53.905647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.906023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.434 [2024-05-15 20:29:53.906050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.434 qpair failed and we were unable to recover it. 00:38:01.434 [2024-05-15 20:29:53.906421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.435 [2024-05-15 20:29:53.906849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.435 [2024-05-15 20:29:53.906877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.435 qpair failed and we were unable to recover it. 
00:38:01.435 [2024-05-15 20:29:53.907309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.435 [2024-05-15 20:29:53.907749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.435 [2024-05-15 20:29:53.907776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.435 qpair failed and we were unable to recover it. 00:38:01.435 [2024-05-15 20:29:53.908190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.435 [2024-05-15 20:29:53.908656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.435 [2024-05-15 20:29:53.908684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.435 qpair failed and we were unable to recover it. 00:38:01.435 [2024-05-15 20:29:53.909112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.435 [2024-05-15 20:29:53.909427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.435 [2024-05-15 20:29:53.909454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.435 qpair failed and we were unable to recover it. 00:38:01.435 [2024-05-15 20:29:53.909749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.435 [2024-05-15 20:29:53.910137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.435 [2024-05-15 20:29:53.910164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.435 qpair failed and we were unable to recover it. 00:38:01.435 [2024-05-15 20:29:53.910588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.435 [2024-05-15 20:29:53.911046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.435 [2024-05-15 20:29:53.911073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.435 qpair failed and we were unable to recover it. 00:38:01.435 [2024-05-15 20:29:53.911497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.435 [2024-05-15 20:29:53.911893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.435 [2024-05-15 20:29:53.911919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.435 qpair failed and we were unable to recover it. 00:38:01.435 [2024-05-15 20:29:53.912228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.435 [2024-05-15 20:29:53.912631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.435 [2024-05-15 20:29:53.912659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.435 qpair failed and we were unable to recover it. 
00:38:01.435 [2024-05-15 20:29:53.913079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.435 [2024-05-15 20:29:53.913573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.435 [2024-05-15 20:29:53.913601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.435 qpair failed and we were unable to recover it. 00:38:01.435 [2024-05-15 20:29:53.914001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.435 [2024-05-15 20:29:53.914400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.435 [2024-05-15 20:29:53.914428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.435 qpair failed and we were unable to recover it. 00:38:01.435 [2024-05-15 20:29:53.914845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.435 [2024-05-15 20:29:53.915268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.435 [2024-05-15 20:29:53.915295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.435 qpair failed and we were unable to recover it. 00:38:01.435 [2024-05-15 20:29:53.915732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.435 [2024-05-15 20:29:53.916157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.435 [2024-05-15 20:29:53.916185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.435 qpair failed and we were unable to recover it. 00:38:01.435 [2024-05-15 20:29:53.916612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.435 [2024-05-15 20:29:53.917011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.435 [2024-05-15 20:29:53.917037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.435 qpair failed and we were unable to recover it. 00:38:01.435 [2024-05-15 20:29:53.917468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.435 [2024-05-15 20:29:53.917892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.435 [2024-05-15 20:29:53.917918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.435 qpair failed and we were unable to recover it. 00:38:01.435 [2024-05-15 20:29:53.918341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.918749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.918777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.708 qpair failed and we were unable to recover it. 
00:38:01.708 [2024-05-15 20:29:53.919079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.919509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.919536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.708 qpair failed and we were unable to recover it. 00:38:01.708 [2024-05-15 20:29:53.919937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.920286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.920335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.708 qpair failed and we were unable to recover it. 00:38:01.708 [2024-05-15 20:29:53.920759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.921191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.921218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.708 qpair failed and we were unable to recover it. 00:38:01.708 [2024-05-15 20:29:53.921489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.921881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.921908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.708 qpair failed and we were unable to recover it. 00:38:01.708 [2024-05-15 20:29:53.922214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.922574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.922604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.708 qpair failed and we were unable to recover it. 00:38:01.708 [2024-05-15 20:29:53.923034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.923428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.923455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.708 qpair failed and we were unable to recover it. 00:38:01.708 [2024-05-15 20:29:53.923895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.924327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.924355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.708 qpair failed and we were unable to recover it. 
00:38:01.708 [2024-05-15 20:29:53.924748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.925151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.925177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.708 qpair failed and we were unable to recover it. 00:38:01.708 [2024-05-15 20:29:53.925593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.926006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.926033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.708 qpair failed and we were unable to recover it. 00:38:01.708 [2024-05-15 20:29:53.926447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.926879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.926906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.708 qpair failed and we were unable to recover it. 00:38:01.708 [2024-05-15 20:29:53.927410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.927689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.927716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.708 qpair failed and we were unable to recover it. 00:38:01.708 [2024-05-15 20:29:53.928141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.928541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.928569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.708 qpair failed and we were unable to recover it. 00:38:01.708 [2024-05-15 20:29:53.928967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.929419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.929448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.708 qpair failed and we were unable to recover it. 00:38:01.708 [2024-05-15 20:29:53.929894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.930291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.930328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.708 qpair failed and we were unable to recover it. 
00:38:01.708 [2024-05-15 20:29:53.930721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.931123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.931150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.708 qpair failed and we were unable to recover it. 00:38:01.708 [2024-05-15 20:29:53.931455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.931883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.931910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.708 qpair failed and we were unable to recover it. 00:38:01.708 [2024-05-15 20:29:53.932348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.932771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.932798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.708 qpair failed and we were unable to recover it. 00:38:01.708 [2024-05-15 20:29:53.933214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.933643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.933672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.708 qpair failed and we were unable to recover it. 00:38:01.708 [2024-05-15 20:29:53.934014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.934426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.934455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.708 qpair failed and we were unable to recover it. 00:38:01.708 [2024-05-15 20:29:53.934874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.935259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.935287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.708 qpair failed and we were unable to recover it. 00:38:01.708 [2024-05-15 20:29:53.935738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.936149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.936175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.708 qpair failed and we were unable to recover it. 
00:38:01.708 [2024-05-15 20:29:53.936556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.936977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.937003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.708 qpair failed and we were unable to recover it. 00:38:01.708 [2024-05-15 20:29:53.937435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.937870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.937896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.708 qpair failed and we were unable to recover it. 00:38:01.708 [2024-05-15 20:29:53.938210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.938625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.938652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.708 qpair failed and we were unable to recover it. 00:38:01.708 [2024-05-15 20:29:53.939091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.939389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.939419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.708 qpair failed and we were unable to recover it. 00:38:01.708 [2024-05-15 20:29:53.939863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.940276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.940302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.708 qpair failed and we were unable to recover it. 00:38:01.708 [2024-05-15 20:29:53.940645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.941110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.708 [2024-05-15 20:29:53.941137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.708 qpair failed and we were unable to recover it. 00:38:01.709 [2024-05-15 20:29:53.941546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.941917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.941943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.709 qpair failed and we were unable to recover it. 
00:38:01.709 [2024-05-15 20:29:53.942348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.942775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.942801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.709 qpair failed and we were unable to recover it. 00:38:01.709 [2024-05-15 20:29:53.943218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.943723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.943752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.709 qpair failed and we were unable to recover it. 00:38:01.709 [2024-05-15 20:29:53.944160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.944661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.944688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.709 qpair failed and we were unable to recover it. 00:38:01.709 [2024-05-15 20:29:53.945106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.945578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.945693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.709 qpair failed and we were unable to recover it. 00:38:01.709 [2024-05-15 20:29:53.946206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.946543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.946573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.709 qpair failed and we were unable to recover it. 00:38:01.709 [2024-05-15 20:29:53.947033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.947467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.947496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.709 qpair failed and we were unable to recover it. 00:38:01.709 [2024-05-15 20:29:53.947950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.948348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.948375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.709 qpair failed and we were unable to recover it. 
00:38:01.709 [2024-05-15 20:29:53.948819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.949201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.949228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.709 qpair failed and we were unable to recover it. 00:38:01.709 [2024-05-15 20:29:53.949502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.949945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.949972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.709 qpair failed and we were unable to recover it. 00:38:01.709 [2024-05-15 20:29:53.950415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.950844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.950872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.709 qpair failed and we were unable to recover it. 00:38:01.709 [2024-05-15 20:29:53.951288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.951727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.951755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.709 qpair failed and we were unable to recover it. 00:38:01.709 [2024-05-15 20:29:53.952173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.952656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.952683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.709 qpair failed and we were unable to recover it. 00:38:01.709 [2024-05-15 20:29:53.953102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.953504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.953532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.709 qpair failed and we were unable to recover it. 00:38:01.709 [2024-05-15 20:29:53.953948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.954234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.954261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.709 qpair failed and we were unable to recover it. 
00:38:01.709 [2024-05-15 20:29:53.954706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.955066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.955093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.709 qpair failed and we were unable to recover it. 00:38:01.709 [2024-05-15 20:29:53.955461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.955860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.955885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.709 qpair failed and we were unable to recover it. 00:38:01.709 [2024-05-15 20:29:53.956289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.956719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.956746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.709 qpair failed and we were unable to recover it. 00:38:01.709 [2024-05-15 20:29:53.956932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.957378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.957407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.709 qpair failed and we were unable to recover it. 00:38:01.709 [2024-05-15 20:29:53.957838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.958326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.958353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.709 qpair failed and we were unable to recover it. 00:38:01.709 [2024-05-15 20:29:53.958771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.959174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.959201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.709 qpair failed and we were unable to recover it. 00:38:01.709 [2024-05-15 20:29:53.959649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.960003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.960030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.709 qpair failed and we were unable to recover it. 
00:38:01.709 [2024-05-15 20:29:53.960461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.960877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.960904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.709 qpair failed and we were unable to recover it. 00:38:01.709 [2024-05-15 20:29:53.961276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.961705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.961733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.709 qpair failed and we were unable to recover it. 00:38:01.709 [2024-05-15 20:29:53.962149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.962550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.962578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.709 qpair failed and we were unable to recover it. 00:38:01.709 [2024-05-15 20:29:53.962953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.963378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.963407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.709 qpair failed and we were unable to recover it. 00:38:01.709 [2024-05-15 20:29:53.963845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.964263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.964289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.709 qpair failed and we were unable to recover it. 00:38:01.709 [2024-05-15 20:29:53.964732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.965090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.965117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.709 qpair failed and we were unable to recover it. 00:38:01.709 [2024-05-15 20:29:53.965635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.966052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.966077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.709 qpair failed and we were unable to recover it. 
00:38:01.709 [2024-05-15 20:29:53.966614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.709 [2024-05-15 20:29:53.967138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.967175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.710 qpair failed and we were unable to recover it. 00:38:01.710 [2024-05-15 20:29:53.967664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.968089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.968117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.710 qpair failed and we were unable to recover it. 00:38:01.710 [2024-05-15 20:29:53.968555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.968978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.969005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.710 qpair failed and we were unable to recover it. 00:38:01.710 [2024-05-15 20:29:53.969417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.969826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.969853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.710 qpair failed and we were unable to recover it. 00:38:01.710 [2024-05-15 20:29:53.970285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.970578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.970606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.710 qpair failed and we were unable to recover it. 00:38:01.710 [2024-05-15 20:29:53.970981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.971403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.971433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.710 qpair failed and we were unable to recover it. 00:38:01.710 [2024-05-15 20:29:53.971907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.972348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.972378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.710 qpair failed and we were unable to recover it. 
00:38:01.710 [2024-05-15 20:29:53.972826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.973220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.973247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.710 qpair failed and we were unable to recover it. 00:38:01.710 [2024-05-15 20:29:53.973665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.974066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.974093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.710 qpair failed and we were unable to recover it. 00:38:01.710 [2024-05-15 20:29:53.974543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.974943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.974971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.710 qpair failed and we were unable to recover it. 00:38:01.710 [2024-05-15 20:29:53.975275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.975630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.975659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.710 qpair failed and we were unable to recover it. 00:38:01.710 [2024-05-15 20:29:53.976085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.976505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.976533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.710 qpair failed and we were unable to recover it. 00:38:01.710 [2024-05-15 20:29:53.976955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.977261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.977288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.710 qpair failed and we were unable to recover it. 00:38:01.710 [2024-05-15 20:29:53.977718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.978161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.978187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.710 qpair failed and we were unable to recover it. 
00:38:01.710 [2024-05-15 20:29:53.978617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.979048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.979074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.710 qpair failed and we were unable to recover it. 00:38:01.710 [2024-05-15 20:29:53.979399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.979827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.979854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.710 qpair failed and we were unable to recover it. 00:38:01.710 [2024-05-15 20:29:53.980288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.980730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.980764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.710 qpair failed and we were unable to recover it. 00:38:01.710 [2024-05-15 20:29:53.981237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.981575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.981608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.710 qpair failed and we were unable to recover it. 00:38:01.710 [2024-05-15 20:29:53.982013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.982327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.982355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.710 qpair failed and we were unable to recover it. 00:38:01.710 [2024-05-15 20:29:53.982804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.983198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.983224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.710 qpair failed and we were unable to recover it. 00:38:01.710 [2024-05-15 20:29:53.983681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.984095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.984121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.710 qpair failed and we were unable to recover it. 
00:38:01.710 [2024-05-15 20:29:53.984538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.984952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.984979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.710 qpair failed and we were unable to recover it. 00:38:01.710 [2024-05-15 20:29:53.985407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.985826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.985854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.710 qpair failed and we were unable to recover it. 00:38:01.710 [2024-05-15 20:29:53.986293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.986605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.986633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.710 qpair failed and we were unable to recover it. 00:38:01.710 [2024-05-15 20:29:53.987042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.987437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.987465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.710 qpair failed and we were unable to recover it. 00:38:01.710 [2024-05-15 20:29:53.987870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.988304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.988344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.710 qpair failed and we were unable to recover it. 00:38:01.710 [2024-05-15 20:29:53.988706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.989137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.989163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.710 qpair failed and we were unable to recover it. 00:38:01.710 [2024-05-15 20:29:53.989585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.989982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.990008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.710 qpair failed and we were unable to recover it. 
00:38:01.710 [2024-05-15 20:29:53.990438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.990804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.990831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.710 qpair failed and we were unable to recover it. 00:38:01.710 [2024-05-15 20:29:53.991258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.991652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.710 [2024-05-15 20:29:53.991681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.710 qpair failed and we were unable to recover it. 00:38:01.711 [2024-05-15 20:29:53.992100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:53.992501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:53.992530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.711 qpair failed and we were unable to recover it. 00:38:01.711 [2024-05-15 20:29:53.992943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:53.993357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:53.993385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.711 qpair failed and we were unable to recover it. 00:38:01.711 [2024-05-15 20:29:53.993824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:53.994221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:53.994248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.711 qpair failed and we were unable to recover it. 00:38:01.711 [2024-05-15 20:29:53.994676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:53.995111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:53.995139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.711 qpair failed and we were unable to recover it. 00:38:01.711 [2024-05-15 20:29:53.995565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:53.996014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:53.996040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.711 qpair failed and we were unable to recover it. 
00:38:01.711 [2024-05-15 20:29:53.996473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:53.996877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:53.996905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.711 qpair failed and we were unable to recover it. 00:38:01.711 [2024-05-15 20:29:53.997647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:53.998064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:53.998097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.711 qpair failed and we were unable to recover it. 00:38:01.711 [2024-05-15 20:29:53.998530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:53.998953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:53.998980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.711 qpair failed and we were unable to recover it. 00:38:01.711 [2024-05-15 20:29:53.999422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:53.999841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:53.999869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.711 qpair failed and we were unable to recover it. 00:38:01.711 [2024-05-15 20:29:54.000166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.000561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.000591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.711 qpair failed and we were unable to recover it. 00:38:01.711 [2024-05-15 20:29:54.001007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.001423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.001452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.711 qpair failed and we were unable to recover it. 00:38:01.711 [2024-05-15 20:29:54.001889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.002290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.002333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.711 qpair failed and we were unable to recover it. 
00:38:01.711 [2024-05-15 20:29:54.002753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.003153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.003179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.711 qpair failed and we were unable to recover it. 00:38:01.711 [2024-05-15 20:29:54.003597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.003998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.004025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.711 qpair failed and we were unable to recover it. 00:38:01.711 [2024-05-15 20:29:54.004497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.004878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.004905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.711 qpair failed and we were unable to recover it. 00:38:01.711 [2024-05-15 20:29:54.005344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.005735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.005761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.711 qpair failed and we were unable to recover it. 00:38:01.711 [2024-05-15 20:29:54.006188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.006607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.006634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.711 qpair failed and we were unable to recover it. 00:38:01.711 [2024-05-15 20:29:54.007038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.007408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.007438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.711 qpair failed and we were unable to recover it. 00:38:01.711 [2024-05-15 20:29:54.007871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.008275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.008303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.711 qpair failed and we were unable to recover it. 
00:38:01.711 [2024-05-15 20:29:54.008755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.009152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.009178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.711 qpair failed and we were unable to recover it. 00:38:01.711 [2024-05-15 20:29:54.009445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.009871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.009898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.711 qpair failed and we were unable to recover it. 00:38:01.711 [2024-05-15 20:29:54.010323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.010641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.010667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.711 qpair failed and we were unable to recover it. 00:38:01.711 [2024-05-15 20:29:54.011096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.011499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.011527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.711 qpair failed and we were unable to recover it. 00:38:01.711 [2024-05-15 20:29:54.011945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.012343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.012372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.711 qpair failed and we were unable to recover it. 00:38:01.711 [2024-05-15 20:29:54.012800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.013117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.013143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.711 qpair failed and we were unable to recover it. 00:38:01.711 [2024-05-15 20:29:54.013625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.014021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.014047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.711 qpair failed and we were unable to recover it. 
00:38:01.711 [2024-05-15 20:29:54.014478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.014876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.014902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.711 qpair failed and we were unable to recover it. 00:38:01.711 [2024-05-15 20:29:54.015328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.015747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.015774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.711 qpair failed and we were unable to recover it. 00:38:01.711 [2024-05-15 20:29:54.016206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.016628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.711 [2024-05-15 20:29:54.016658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.711 qpair failed and we were unable to recover it. 00:38:01.711 [2024-05-15 20:29:54.017090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.017513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.017541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.712 qpair failed and we were unable to recover it. 00:38:01.712 [2024-05-15 20:29:54.017952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.018350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.018377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.712 qpair failed and we were unable to recover it. 00:38:01.712 [2024-05-15 20:29:54.018766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.019210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.019236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.712 qpair failed and we were unable to recover it. 00:38:01.712 [2024-05-15 20:29:54.019641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.020044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.020070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.712 qpair failed and we were unable to recover it. 
00:38:01.712 [2024-05-15 20:29:54.020487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.020884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.020911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.712 qpair failed and we were unable to recover it. 00:38:01.712 [2024-05-15 20:29:54.021338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.021745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.021772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.712 qpair failed and we were unable to recover it. 00:38:01.712 [2024-05-15 20:29:54.022086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.022515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.022544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.712 qpair failed and we were unable to recover it. 00:38:01.712 [2024-05-15 20:29:54.022974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.023376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.023404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.712 qpair failed and we were unable to recover it. 00:38:01.712 [2024-05-15 20:29:54.023814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.024219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.024252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.712 qpair failed and we were unable to recover it. 00:38:01.712 [2024-05-15 20:29:54.024633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.024965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.024993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.712 qpair failed and we were unable to recover it. 00:38:01.712 [2024-05-15 20:29:54.025416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.025819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.025845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.712 qpair failed and we were unable to recover it. 
00:38:01.712 [2024-05-15 20:29:54.026274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.026668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.026695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.712 qpair failed and we were unable to recover it. 00:38:01.712 [2024-05-15 20:29:54.027110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.027515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.027543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.712 qpair failed and we were unable to recover it. 00:38:01.712 [2024-05-15 20:29:54.027980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.028374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.028403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.712 qpair failed and we were unable to recover it. 00:38:01.712 [2024-05-15 20:29:54.028847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.029152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.029179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.712 qpair failed and we were unable to recover it. 00:38:01.712 [2024-05-15 20:29:54.029597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.029996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.030023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.712 qpair failed and we were unable to recover it. 00:38:01.712 [2024-05-15 20:29:54.030356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.030775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.030802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.712 qpair failed and we were unable to recover it. 00:38:01.712 [2024-05-15 20:29:54.031231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.031646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.031674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.712 qpair failed and we were unable to recover it. 
00:38:01.712 [2024-05-15 20:29:54.032087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.032353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.032381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.712 qpair failed and we were unable to recover it. 00:38:01.712 [2024-05-15 20:29:54.032802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.033206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.033233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.712 qpair failed and we were unable to recover it. 00:38:01.712 [2024-05-15 20:29:54.033590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.034008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.034035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.712 qpair failed and we were unable to recover it. 00:38:01.712 [2024-05-15 20:29:54.034464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.034766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.034796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.712 qpair failed and we were unable to recover it. 00:38:01.712 [2024-05-15 20:29:54.035233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.035508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.035536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.712 qpair failed and we were unable to recover it. 00:38:01.712 [2024-05-15 20:29:54.035940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.036260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.036287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.712 qpair failed and we were unable to recover it. 00:38:01.712 [2024-05-15 20:29:54.036743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.037154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.712 [2024-05-15 20:29:54.037182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.712 qpair failed and we were unable to recover it. 
00:38:01.713 [2024-05-15 20:29:54.037639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.038043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.038069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.713 qpair failed and we were unable to recover it. 00:38:01.713 [2024-05-15 20:29:54.038487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.038901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.038928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.713 qpair failed and we were unable to recover it. 00:38:01.713 [2024-05-15 20:29:54.039350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.039790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.039818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.713 qpair failed and we were unable to recover it. 00:38:01.713 [2024-05-15 20:29:54.040262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.040567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.040595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.713 qpair failed and we were unable to recover it. 00:38:01.713 [2024-05-15 20:29:54.041034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.041430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.041458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.713 qpair failed and we were unable to recover it. 00:38:01.713 [2024-05-15 20:29:54.041886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.042236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.042262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.713 qpair failed and we were unable to recover it. 00:38:01.713 [2024-05-15 20:29:54.042669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.043110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.043137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.713 qpair failed and we were unable to recover it. 
00:38:01.713 [2024-05-15 20:29:54.043444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.043885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.043912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.713 qpair failed and we were unable to recover it. 00:38:01.713 [2024-05-15 20:29:54.044344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.044772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.044799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.713 qpair failed and we were unable to recover it. 00:38:01.713 [2024-05-15 20:29:54.045065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.045412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.045441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.713 qpair failed and we were unable to recover it. 00:38:01.713 [2024-05-15 20:29:54.045886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.046212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.046238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.713 qpair failed and we were unable to recover it. 00:38:01.713 [2024-05-15 20:29:54.046692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.047091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.047117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.713 qpair failed and we were unable to recover it. 00:38:01.713 [2024-05-15 20:29:54.047443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.047884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.047910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.713 qpair failed and we were unable to recover it. 00:38:01.713 [2024-05-15 20:29:54.048333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.048821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.048849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.713 qpair failed and we were unable to recover it. 
00:38:01.713 [2024-05-15 20:29:54.049277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.049740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.049769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.713 qpair failed and we were unable to recover it. 00:38:01.713 [2024-05-15 20:29:54.050194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.050579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.050607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.713 qpair failed and we were unable to recover it. 00:38:01.713 [2024-05-15 20:29:54.051011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.051434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.051461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.713 qpair failed and we were unable to recover it. 00:38:01.713 [2024-05-15 20:29:54.051878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.052182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.052208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.713 qpair failed and we were unable to recover it. 00:38:01.713 [2024-05-15 20:29:54.052624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.053044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.053072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.713 qpair failed and we were unable to recover it. 00:38:01.713 [2024-05-15 20:29:54.053398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.053720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.053753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.713 qpair failed and we were unable to recover it. 00:38:01.713 [2024-05-15 20:29:54.054076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.054485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.054514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.713 qpair failed and we were unable to recover it. 
00:38:01.713 [2024-05-15 20:29:54.054968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.055366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.055393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.713 qpair failed and we were unable to recover it. 00:38:01.713 [2024-05-15 20:29:54.055708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.056134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.056160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.713 qpair failed and we were unable to recover it. 00:38:01.713 [2024-05-15 20:29:54.056595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.056992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.057018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.713 qpair failed and we were unable to recover it. 00:38:01.713 [2024-05-15 20:29:54.057421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.057847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.057873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.713 qpair failed and we were unable to recover it. 00:38:01.713 [2024-05-15 20:29:54.058294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.058731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.058760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.713 qpair failed and we were unable to recover it. 00:38:01.713 [2024-05-15 20:29:54.059046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.059435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.059463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.713 qpair failed and we were unable to recover it. 00:38:01.713 [2024-05-15 20:29:54.059806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.060245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.060272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.713 qpair failed and we were unable to recover it. 
00:38:01.713 [2024-05-15 20:29:54.060638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.061039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.061065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.713 qpair failed and we were unable to recover it. 00:38:01.713 [2024-05-15 20:29:54.061382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.061816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.713 [2024-05-15 20:29:54.061843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.714 qpair failed and we were unable to recover it. 00:38:01.714 [2024-05-15 20:29:54.062248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.062648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.062676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.714 qpair failed and we were unable to recover it. 00:38:01.714 [2024-05-15 20:29:54.063130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.063450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.063477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.714 qpair failed and we were unable to recover it. 00:38:01.714 [2024-05-15 20:29:54.063931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.064331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.064359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.714 qpair failed and we were unable to recover it. 00:38:01.714 [2024-05-15 20:29:54.064798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.065204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.065230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.714 qpair failed and we were unable to recover it. 00:38:01.714 [2024-05-15 20:29:54.065635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.066082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.066116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.714 qpair failed and we were unable to recover it. 
00:38:01.714 [2024-05-15 20:29:54.066542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.066936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.066963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.714 qpair failed and we were unable to recover it. 00:38:01.714 [2024-05-15 20:29:54.067407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.067837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.067864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.714 qpair failed and we were unable to recover it. 00:38:01.714 [2024-05-15 20:29:54.068285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.068782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.068811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.714 qpair failed and we were unable to recover it. 00:38:01.714 [2024-05-15 20:29:54.069249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.069643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.069673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.714 qpair failed and we were unable to recover it. 00:38:01.714 [2024-05-15 20:29:54.070116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.070644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.070747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.714 qpair failed and we were unable to recover it. 00:38:01.714 [2024-05-15 20:29:54.071247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.071683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.071714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.714 qpair failed and we were unable to recover it. 00:38:01.714 [2024-05-15 20:29:54.072138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.072693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.072797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.714 qpair failed and we were unable to recover it. 
00:38:01.714 [2024-05-15 20:29:54.073337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.073761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.073791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.714 qpair failed and we were unable to recover it. 00:38:01.714 [2024-05-15 20:29:54.074212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.074635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.074667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.714 qpair failed and we were unable to recover it. 00:38:01.714 [2024-05-15 20:29:54.074992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.075454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.075485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.714 qpair failed and we were unable to recover it. 00:38:01.714 [2024-05-15 20:29:54.075808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.076228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.076255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.714 qpair failed and we were unable to recover it. 00:38:01.714 [2024-05-15 20:29:54.076672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.077070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.077098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.714 qpair failed and we were unable to recover it. 00:38:01.714 [2024-05-15 20:29:54.077552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.077954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.077982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.714 qpair failed and we were unable to recover it. 00:38:01.714 [2024-05-15 20:29:54.078409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.078802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.078828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.714 qpair failed and we were unable to recover it. 
00:38:01.714 [2024-05-15 20:29:54.079269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.079682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.079711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.714 qpair failed and we were unable to recover it. 00:38:01.714 [2024-05-15 20:29:54.080126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.080550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.080578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.714 qpair failed and we were unable to recover it. 00:38:01.714 [2024-05-15 20:29:54.080997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.081396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.081425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.714 qpair failed and we were unable to recover it. 00:38:01.714 [2024-05-15 20:29:54.081870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.082284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.082312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.714 qpair failed and we were unable to recover it. 00:38:01.714 [2024-05-15 20:29:54.082655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.083107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.083134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.714 qpair failed and we were unable to recover it. 00:38:01.714 [2024-05-15 20:29:54.083501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.083929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.083955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.714 qpair failed and we were unable to recover it. 00:38:01.714 [2024-05-15 20:29:54.084391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.084809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.084837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.714 qpair failed and we were unable to recover it. 
00:38:01.714 [2024-05-15 20:29:54.085281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.085756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.085785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.714 qpair failed and we were unable to recover it. 00:38:01.714 [2024-05-15 20:29:54.086096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.086456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.086484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.714 qpair failed and we were unable to recover it. 00:38:01.714 [2024-05-15 20:29:54.086961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.087272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.714 [2024-05-15 20:29:54.087299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.714 qpair failed and we were unable to recover it. 00:38:01.714 [2024-05-15 20:29:54.087737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.088170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.088196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.715 qpair failed and we were unable to recover it. 00:38:01.715 [2024-05-15 20:29:54.088620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.089049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.089076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.715 qpair failed and we were unable to recover it. 00:38:01.715 [2024-05-15 20:29:54.089510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.089874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.089901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.715 qpair failed and we were unable to recover it. 00:38:01.715 [2024-05-15 20:29:54.090341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.090753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.090780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.715 qpair failed and we were unable to recover it. 
00:38:01.715 [2024-05-15 20:29:54.091180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.091707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.091734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.715 qpair failed and we were unable to recover it. 00:38:01.715 [2024-05-15 20:29:54.092130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.092529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.092557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.715 qpair failed and we were unable to recover it. 00:38:01.715 [2024-05-15 20:29:54.092974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.093443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.093474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.715 qpair failed and we were unable to recover it. 00:38:01.715 [2024-05-15 20:29:54.093889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.094307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.094349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.715 qpair failed and we were unable to recover it. 00:38:01.715 [2024-05-15 20:29:54.094810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.095229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.095256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.715 qpair failed and we were unable to recover it. 00:38:01.715 [2024-05-15 20:29:54.095683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.096087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.096114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.715 qpair failed and we were unable to recover it. 00:38:01.715 [2024-05-15 20:29:54.096530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.096971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.097001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.715 qpair failed and we were unable to recover it. 
00:38:01.715 [2024-05-15 20:29:54.097400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.097702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.097729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.715 qpair failed and we were unable to recover it. 00:38:01.715 [2024-05-15 20:29:54.098104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.098427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.098455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.715 qpair failed and we were unable to recover it. 00:38:01.715 [2024-05-15 20:29:54.098776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.099198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.099225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.715 qpair failed and we were unable to recover it. 00:38:01.715 [2024-05-15 20:29:54.099543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.099851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.099879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.715 qpair failed and we were unable to recover it. 00:38:01.715 [2024-05-15 20:29:54.100326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.100731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.100758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.715 qpair failed and we were unable to recover it. 00:38:01.715 [2024-05-15 20:29:54.101198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.101666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.101700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.715 qpair failed and we were unable to recover it. 00:38:01.715 [2024-05-15 20:29:54.102113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.102560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.102589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.715 qpair failed and we were unable to recover it. 
00:38:01.715 [2024-05-15 20:29:54.103012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.103373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.103402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.715 qpair failed and we were unable to recover it. 00:38:01.715 [2024-05-15 20:29:54.103835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.104236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.104264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.715 qpair failed and we were unable to recover it. 00:38:01.715 [2024-05-15 20:29:54.104707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.105125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.105152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.715 qpair failed and we were unable to recover it. 00:38:01.715 [2024-05-15 20:29:54.105568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.105998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.106026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.715 qpair failed and we were unable to recover it. 00:38:01.715 [2024-05-15 20:29:54.106434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.106859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.106887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.715 qpair failed and we were unable to recover it. 00:38:01.715 [2024-05-15 20:29:54.107329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.107760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.107787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.715 qpair failed and we were unable to recover it. 00:38:01.715 [2024-05-15 20:29:54.108200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.108675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.108703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.715 qpair failed and we were unable to recover it. 
00:38:01.715 [2024-05-15 20:29:54.109129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.109545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.109573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.715 qpair failed and we were unable to recover it. 00:38:01.715 [2024-05-15 20:29:54.109993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.110397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.110424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.715 qpair failed and we were unable to recover it. 00:38:01.715 [2024-05-15 20:29:54.110826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.111272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.111298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.715 qpair failed and we were unable to recover it. 00:38:01.715 [2024-05-15 20:29:54.111702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.112113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.112140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.715 qpair failed and we were unable to recover it. 00:38:01.715 [2024-05-15 20:29:54.112539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.112947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.715 [2024-05-15 20:29:54.112974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.715 qpair failed and we were unable to recover it. 00:38:01.715 [2024-05-15 20:29:54.113392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.113794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.113820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.716 qpair failed and we were unable to recover it. 00:38:01.716 [2024-05-15 20:29:54.114253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.114729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.114758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.716 qpair failed and we were unable to recover it. 
00:38:01.716 [2024-05-15 20:29:54.115200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.115618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.115646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.716 qpair failed and we were unable to recover it. 00:38:01.716 [2024-05-15 20:29:54.116075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.116389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.116423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.716 qpair failed and we were unable to recover it. 00:38:01.716 [2024-05-15 20:29:54.116867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.117299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.117340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.716 qpair failed and we were unable to recover it. 00:38:01.716 [2024-05-15 20:29:54.117750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.118152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.118178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.716 qpair failed and we were unable to recover it. 00:38:01.716 [2024-05-15 20:29:54.118598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.119012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.119039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.716 qpair failed and we were unable to recover it. 00:38:01.716 [2024-05-15 20:29:54.119467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.119897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.119924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.716 qpair failed and we were unable to recover it. 00:38:01.716 [2024-05-15 20:29:54.120363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.120762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.120789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.716 qpair failed and we were unable to recover it. 
00:38:01.716 [2024-05-15 20:29:54.121221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.121635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.121664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.716 qpair failed and we were unable to recover it. 00:38:01.716 [2024-05-15 20:29:54.122076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.122544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.122572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.716 qpair failed and we were unable to recover it. 00:38:01.716 [2024-05-15 20:29:54.123000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.123464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.123493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.716 qpair failed and we were unable to recover it. 00:38:01.716 [2024-05-15 20:29:54.123906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.124341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.124369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.716 qpair failed and we were unable to recover it. 00:38:01.716 [2024-05-15 20:29:54.124792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.125226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.125252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.716 qpair failed and we were unable to recover it. 00:38:01.716 [2024-05-15 20:29:54.125682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.126087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.126113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.716 qpair failed and we were unable to recover it. 00:38:01.716 [2024-05-15 20:29:54.126638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.127215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.127252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.716 qpair failed and we were unable to recover it. 
00:38:01.716 [2024-05-15 20:29:54.127771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.128167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.128194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.716 qpair failed and we were unable to recover it. 00:38:01.716 [2024-05-15 20:29:54.128619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.129039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.129068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.716 qpair failed and we were unable to recover it. 00:38:01.716 [2024-05-15 20:29:54.129491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.129875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.129902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.716 qpair failed and we were unable to recover it. 00:38:01.716 [2024-05-15 20:29:54.130339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.130739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.130767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.716 qpair failed and we were unable to recover it. 00:38:01.716 [2024-05-15 20:29:54.131194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.131616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.131646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.716 qpair failed and we were unable to recover it. 00:38:01.716 [2024-05-15 20:29:54.132077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.132476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.132505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.716 qpair failed and we were unable to recover it. 00:38:01.716 [2024-05-15 20:29:54.132939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.133410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.133440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.716 qpair failed and we were unable to recover it. 
00:38:01.716 [2024-05-15 20:29:54.133831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.134232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.134259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.716 qpair failed and we were unable to recover it. 00:38:01.716 [2024-05-15 20:29:54.134727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.135138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.135164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.716 qpair failed and we were unable to recover it. 00:38:01.716 [2024-05-15 20:29:54.135615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.136026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.136054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.716 qpair failed and we were unable to recover it. 00:38:01.716 [2024-05-15 20:29:54.136466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.136864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.136890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.716 qpair failed and we were unable to recover it. 00:38:01.716 [2024-05-15 20:29:54.137293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.137734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.716 [2024-05-15 20:29:54.137762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.717 qpair failed and we were unable to recover it. 00:38:01.717 [2024-05-15 20:29:54.138146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.138610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.138638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.717 qpair failed and we were unable to recover it. 00:38:01.717 [2024-05-15 20:29:54.139085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.139620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.139725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.717 qpair failed and we were unable to recover it. 
00:38:01.717 [2024-05-15 20:29:54.140246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.140629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.140659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.717 qpair failed and we were unable to recover it. 00:38:01.717 [2024-05-15 20:29:54.141079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.141517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.141545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.717 qpair failed and we were unable to recover it. 00:38:01.717 [2024-05-15 20:29:54.141950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.142340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.142368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.717 qpair failed and we were unable to recover it. 00:38:01.717 [2024-05-15 20:29:54.142798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.143215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.143241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.717 qpair failed and we were unable to recover it. 00:38:01.717 [2024-05-15 20:29:54.143604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.143921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.143947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.717 qpair failed and we were unable to recover it. 00:38:01.717 [2024-05-15 20:29:54.144431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.144913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.144939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.717 qpair failed and we were unable to recover it. 00:38:01.717 [2024-05-15 20:29:54.145351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.145769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.145795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.717 qpair failed and we were unable to recover it. 
00:38:01.717 [2024-05-15 20:29:54.146215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.146657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.146697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.717 qpair failed and we were unable to recover it. 00:38:01.717 [2024-05-15 20:29:54.147146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.147566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.147595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.717 qpair failed and we were unable to recover it. 00:38:01.717 [2024-05-15 20:29:54.148010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.148403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.148444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.717 qpair failed and we were unable to recover it. 00:38:01.717 [2024-05-15 20:29:54.148861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.149306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.149366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.717 qpair failed and we were unable to recover it. 00:38:01.717 [2024-05-15 20:29:54.149808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.150123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.150159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.717 qpair failed and we were unable to recover it. 00:38:01.717 [2024-05-15 20:29:54.150575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.150980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.151007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.717 qpair failed and we were unable to recover it. 00:38:01.717 [2024-05-15 20:29:54.151408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.151841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.151867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.717 qpair failed and we were unable to recover it. 
00:38:01.717 [2024-05-15 20:29:54.152332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.152755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.152783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.717 qpair failed and we were unable to recover it. 00:38:01.717 [2024-05-15 20:29:54.153228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.153659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.153686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.717 qpair failed and we were unable to recover it. 00:38:01.717 [2024-05-15 20:29:54.154130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.154526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.154553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.717 qpair failed and we were unable to recover it. 00:38:01.717 [2024-05-15 20:29:54.154837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.155271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.717 [2024-05-15 20:29:54.155298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.717 qpair failed and we were unable to recover it. 00:38:01.717 [2024-05-15 20:29:54.155798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.156197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.156224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.718 qpair failed and we were unable to recover it. 00:38:01.718 [2024-05-15 20:29:54.156680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.157091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.157119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.718 qpair failed and we were unable to recover it. 00:38:01.718 [2024-05-15 20:29:54.157531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.157952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.157978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.718 qpair failed and we were unable to recover it. 
00:38:01.718 [2024-05-15 20:29:54.158377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.158820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.158846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.718 qpair failed and we were unable to recover it. 00:38:01.718 [2024-05-15 20:29:54.159284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.159723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.159751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.718 qpair failed and we were unable to recover it. 00:38:01.718 [2024-05-15 20:29:54.160133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.160524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.160554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.718 qpair failed and we were unable to recover it. 00:38:01.718 [2024-05-15 20:29:54.160967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.161394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.161422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.718 qpair failed and we were unable to recover it. 00:38:01.718 [2024-05-15 20:29:54.161858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.162213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.162238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.718 qpair failed and we were unable to recover it. 00:38:01.718 [2024-05-15 20:29:54.162653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.163049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.163076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.718 qpair failed and we were unable to recover it. 00:38:01.718 [2024-05-15 20:29:54.163507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.163941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.163969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.718 qpair failed and we were unable to recover it. 
00:38:01.718 [2024-05-15 20:29:54.164266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.164713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.164742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.718 qpair failed and we were unable to recover it. 00:38:01.718 [2024-05-15 20:29:54.165148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.165584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.165613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.718 qpair failed and we were unable to recover it. 00:38:01.718 [2024-05-15 20:29:54.166030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.166420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.166449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.718 qpair failed and we were unable to recover it. 00:38:01.718 [2024-05-15 20:29:54.166882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.167286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.167323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.718 qpair failed and we were unable to recover it. 00:38:01.718 [2024-05-15 20:29:54.167746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.168184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.168213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.718 qpair failed and we were unable to recover it. 00:38:01.718 [2024-05-15 20:29:54.168649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.169065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.169092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.718 qpair failed and we were unable to recover it. 00:38:01.718 [2024-05-15 20:29:54.169415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.169843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.169869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.718 qpair failed and we were unable to recover it. 
00:38:01.718 [2024-05-15 20:29:54.170303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.170748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.170775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.718 qpair failed and we were unable to recover it. 00:38:01.718 [2024-05-15 20:29:54.171189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.171659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.171685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.718 qpair failed and we were unable to recover it. 00:38:01.718 [2024-05-15 20:29:54.172116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.172534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.172562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.718 qpair failed and we were unable to recover it. 00:38:01.718 [2024-05-15 20:29:54.172981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.173403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.173432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.718 qpair failed and we were unable to recover it. 00:38:01.718 [2024-05-15 20:29:54.173871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.174266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.174294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.718 qpair failed and we were unable to recover it. 00:38:01.718 [2024-05-15 20:29:54.174772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.175173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.175200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.718 qpair failed and we were unable to recover it. 00:38:01.718 [2024-05-15 20:29:54.175614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.176014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.176040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.718 qpair failed and we were unable to recover it. 
00:38:01.718 [2024-05-15 20:29:54.176464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.176886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.176912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.718 qpair failed and we were unable to recover it. 00:38:01.718 [2024-05-15 20:29:54.177357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.177681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.177708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.718 qpair failed and we were unable to recover it. 00:38:01.718 [2024-05-15 20:29:54.178176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.178572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.178601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.718 qpair failed and we were unable to recover it. 00:38:01.718 [2024-05-15 20:29:54.179035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.179443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.179472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.718 qpair failed and we were unable to recover it. 00:38:01.718 [2024-05-15 20:29:54.179791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.180254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.180282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.718 qpair failed and we were unable to recover it. 00:38:01.718 [2024-05-15 20:29:54.180606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.181027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.718 [2024-05-15 20:29:54.181054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.718 qpair failed and we were unable to recover it. 00:38:01.718 [2024-05-15 20:29:54.181495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.181913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.181939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.719 qpair failed and we were unable to recover it. 
00:38:01.719 [2024-05-15 20:29:54.182345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.182784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.182810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.719 qpair failed and we were unable to recover it. 00:38:01.719 [2024-05-15 20:29:54.183120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.183499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.183527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.719 qpair failed and we were unable to recover it. 00:38:01.719 [2024-05-15 20:29:54.183903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.184333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.184361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.719 qpair failed and we were unable to recover it. 00:38:01.719 [2024-05-15 20:29:54.184860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.185269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.185296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.719 qpair failed and we were unable to recover it. 00:38:01.719 [2024-05-15 20:29:54.185707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.186135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.186162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.719 qpair failed and we were unable to recover it. 00:38:01.719 [2024-05-15 20:29:54.186626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.187042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.187071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.719 qpair failed and we were unable to recover it. 00:38:01.719 [2024-05-15 20:29:54.187490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.187948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.187975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.719 qpair failed and we were unable to recover it. 
00:38:01.719 [2024-05-15 20:29:54.188287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.188722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.188749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.719 qpair failed and we were unable to recover it. 00:38:01.719 [2024-05-15 20:29:54.189180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.189617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.189644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.719 qpair failed and we were unable to recover it. 00:38:01.719 [2024-05-15 20:29:54.190098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.190497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.190542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.719 qpair failed and we were unable to recover it. 00:38:01.719 [2024-05-15 20:29:54.190967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.191204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.191231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.719 qpair failed and we were unable to recover it. 00:38:01.719 [2024-05-15 20:29:54.191633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.192058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.192086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.719 qpair failed and we were unable to recover it. 00:38:01.719 [2024-05-15 20:29:54.192509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.192936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.192963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.719 qpair failed and we were unable to recover it. 00:38:01.719 [2024-05-15 20:29:54.193380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.193780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.193807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.719 qpair failed and we were unable to recover it. 
00:38:01.719 [2024-05-15 20:29:54.194241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.194655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.194684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.719 qpair failed and we were unable to recover it. 00:38:01.719 [2024-05-15 20:29:54.195097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.195574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.195601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.719 qpair failed and we were unable to recover it. 00:38:01.719 [2024-05-15 20:29:54.196005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.196437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.196466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.719 qpair failed and we were unable to recover it. 00:38:01.719 [2024-05-15 20:29:54.196788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.197257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.197284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.719 qpair failed and we were unable to recover it. 00:38:01.719 [2024-05-15 20:29:54.197735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.198143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.719 [2024-05-15 20:29:54.198171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:01.719 qpair failed and we were unable to recover it. 00:38:02.028 [2024-05-15 20:29:54.198491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.028 [2024-05-15 20:29:54.198861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.198891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.029 qpair failed and we were unable to recover it. 00:38:02.029 [2024-05-15 20:29:54.199220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.199660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.199688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.029 qpair failed and we were unable to recover it. 
00:38:02.029 [2024-05-15 20:29:54.200133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.200575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.200603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.029 qpair failed and we were unable to recover it. 00:38:02.029 [2024-05-15 20:29:54.201035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.201298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.201337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.029 qpair failed and we were unable to recover it. 00:38:02.029 [2024-05-15 20:29:54.201733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.202159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.202187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.029 qpair failed and we were unable to recover it. 00:38:02.029 [2024-05-15 20:29:54.202591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.203016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.203042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.029 qpair failed and we were unable to recover it. 00:38:02.029 [2024-05-15 20:29:54.203464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.203875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.203904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.029 qpair failed and we were unable to recover it. 00:38:02.029 [2024-05-15 20:29:54.204333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.204752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.204778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.029 qpair failed and we were unable to recover it. 00:38:02.029 [2024-05-15 20:29:54.205211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.205638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.205665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.029 qpair failed and we were unable to recover it. 
00:38:02.029 [2024-05-15 20:29:54.206103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.206542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.206570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.029 qpair failed and we were unable to recover it. 00:38:02.029 [2024-05-15 20:29:54.206890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.207312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.207352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.029 qpair failed and we were unable to recover it. 00:38:02.029 [2024-05-15 20:29:54.207801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.208206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.208232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.029 qpair failed and we were unable to recover it. 00:38:02.029 [2024-05-15 20:29:54.208707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.209133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.209160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.029 qpair failed and we were unable to recover it. 00:38:02.029 [2024-05-15 20:29:54.209668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.210012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.210038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.029 qpair failed and we were unable to recover it. 00:38:02.029 [2024-05-15 20:29:54.210522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.210936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.210962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.029 qpair failed and we were unable to recover it. 00:38:02.029 [2024-05-15 20:29:54.211385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.211780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.211806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.029 qpair failed and we were unable to recover it. 
00:38:02.029 [2024-05-15 20:29:54.212226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.212632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.212659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.029 qpair failed and we were unable to recover it. 00:38:02.029 [2024-05-15 20:29:54.213044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.213458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.213485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.029 qpair failed and we were unable to recover it. 00:38:02.029 [2024-05-15 20:29:54.213903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.214330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.214358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.029 qpair failed and we were unable to recover it. 00:38:02.029 [2024-05-15 20:29:54.214836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.215252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.215278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.029 qpair failed and we were unable to recover it. 00:38:02.029 [2024-05-15 20:29:54.215711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.216116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.216143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.029 qpair failed and we were unable to recover it. 00:38:02.029 [2024-05-15 20:29:54.216679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.217245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.217284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.029 qpair failed and we were unable to recover it. 00:38:02.029 [2024-05-15 20:29:54.217787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.218246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.218273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.029 qpair failed and we were unable to recover it. 
00:38:02.029 [2024-05-15 20:29:54.218588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.219013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.219040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.029 qpair failed and we were unable to recover it. 00:38:02.029 [2024-05-15 20:29:54.219459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.219900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.219926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.029 qpair failed and we were unable to recover it. 00:38:02.029 [2024-05-15 20:29:54.220367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.220689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.220718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.029 qpair failed and we were unable to recover it. 00:38:02.029 [2024-05-15 20:29:54.221042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.221469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.221498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.029 qpair failed and we were unable to recover it. 00:38:02.029 [2024-05-15 20:29:54.221953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.222352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.222379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.029 qpair failed and we were unable to recover it. 00:38:02.029 [2024-05-15 20:29:54.222889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.223281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.223307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.029 qpair failed and we were unable to recover it. 00:38:02.029 [2024-05-15 20:29:54.223709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.224165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.029 [2024-05-15 20:29:54.224192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.029 qpair failed and we were unable to recover it. 
00:38:02.029 [2024-05-15 20:29:54.224636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.030 [2024-05-15 20:29:54.225013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.030 [2024-05-15 20:29:54.225040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.030 qpair failed and we were unable to recover it. 00:38:02.030 [2024-05-15 20:29:54.225276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.030 [2024-05-15 20:29:54.225736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.030 [2024-05-15 20:29:54.225773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.030 qpair failed and we were unable to recover it. 00:38:02.030 [2024-05-15 20:29:54.226031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.030 [2024-05-15 20:29:54.226440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.030 [2024-05-15 20:29:54.226468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.030 qpair failed and we were unable to recover it. 00:38:02.030 [2024-05-15 20:29:54.226889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.030 [2024-05-15 20:29:54.227265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.030 [2024-05-15 20:29:54.227291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.030 qpair failed and we were unable to recover it. 00:38:02.030 [2024-05-15 20:29:54.227803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.030 [2024-05-15 20:29:54.228230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.030 [2024-05-15 20:29:54.228257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.030 qpair failed and we were unable to recover it. 00:38:02.030 [2024-05-15 20:29:54.228686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.030 [2024-05-15 20:29:54.229134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.030 [2024-05-15 20:29:54.229162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.030 qpair failed and we were unable to recover it. 00:38:02.030 [2024-05-15 20:29:54.229580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.030 [2024-05-15 20:29:54.229969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.030 [2024-05-15 20:29:54.229997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.030 qpair failed and we were unable to recover it. 
00:38:02.030 [2024-05-15 20:29:54.230344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.030 [2024-05-15 20:29:54.230931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.030 qpair failed and we were unable to recover it. 
[... the same sequence of posix.c:1037:posix_sock_create "connect() failed, errno = 111" followed by nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420" and "qpair failed and we were unable to recover it." repeats continuously from 20:29:54.230 through 20:29:54.362 ...]
00:38:02.035 [2024-05-15 20:29:54.362288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.035 [2024-05-15 20:29:54.362347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.035 qpair failed and we were unable to recover it. 
00:38:02.035 [2024-05-15 20:29:54.362689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.035 [2024-05-15 20:29:54.363118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.035 [2024-05-15 20:29:54.363144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.035 qpair failed and we were unable to recover it. 00:38:02.035 [2024-05-15 20:29:54.363553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.035 [2024-05-15 20:29:54.363943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.035 [2024-05-15 20:29:54.363969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.035 qpair failed and we were unable to recover it. 00:38:02.035 [2024-05-15 20:29:54.364400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.035 [2024-05-15 20:29:54.364811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.035 [2024-05-15 20:29:54.364838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.035 qpair failed and we were unable to recover it. 00:38:02.035 [2024-05-15 20:29:54.365279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.035 [2024-05-15 20:29:54.365721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.035 [2024-05-15 20:29:54.365749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.035 qpair failed and we were unable to recover it. 00:38:02.035 [2024-05-15 20:29:54.366179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.035 [2024-05-15 20:29:54.366657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.035 [2024-05-15 20:29:54.366686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.035 qpair failed and we were unable to recover it. 00:38:02.035 [2024-05-15 20:29:54.367104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.035 [2024-05-15 20:29:54.367531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.035 [2024-05-15 20:29:54.367561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.035 qpair failed and we were unable to recover it. 00:38:02.035 [2024-05-15 20:29:54.367986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.035 [2024-05-15 20:29:54.368406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.035 [2024-05-15 20:29:54.368434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.035 qpair failed and we were unable to recover it. 
00:38:02.035 [2024-05-15 20:29:54.368915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.035 [2024-05-15 20:29:54.369327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.035 [2024-05-15 20:29:54.369354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.035 qpair failed and we were unable to recover it. 00:38:02.035 [2024-05-15 20:29:54.369794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.035 [2024-05-15 20:29:54.370219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.035 [2024-05-15 20:29:54.370245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.035 qpair failed and we were unable to recover it. 00:38:02.035 [2024-05-15 20:29:54.370451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.035 [2024-05-15 20:29:54.370903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.035 [2024-05-15 20:29:54.370931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.035 qpair failed and we were unable to recover it. 00:38:02.035 [2024-05-15 20:29:54.371357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.035 [2024-05-15 20:29:54.371815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.035 [2024-05-15 20:29:54.371842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.035 qpair failed and we were unable to recover it. 00:38:02.036 [2024-05-15 20:29:54.372293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.372721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.372748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.036 qpair failed and we were unable to recover it. 00:38:02.036 [2024-05-15 20:29:54.373176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.373571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.373600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.036 qpair failed and we were unable to recover it. 00:38:02.036 [2024-05-15 20:29:54.374012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.374411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.374439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.036 qpair failed and we were unable to recover it. 
00:38:02.036 [2024-05-15 20:29:54.374860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.375283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.375310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.036 qpair failed and we were unable to recover it. 00:38:02.036 [2024-05-15 20:29:54.375735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.376142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.376168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.036 qpair failed and we were unable to recover it. 00:38:02.036 [2024-05-15 20:29:54.376469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.376901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.376928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.036 qpair failed and we were unable to recover it. 00:38:02.036 [2024-05-15 20:29:54.377341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.377756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.377782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.036 qpair failed and we were unable to recover it. 00:38:02.036 [2024-05-15 20:29:54.378137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.378539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.378569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.036 qpair failed and we were unable to recover it. 00:38:02.036 [2024-05-15 20:29:54.378996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.379393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.379420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.036 qpair failed and we were unable to recover it. 00:38:02.036 [2024-05-15 20:29:54.379827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.380227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.380253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.036 qpair failed and we were unable to recover it. 
00:38:02.036 [2024-05-15 20:29:54.380696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.381094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.381121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.036 qpair failed and we were unable to recover it. 00:38:02.036 [2024-05-15 20:29:54.381559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.381993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.382019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.036 qpair failed and we were unable to recover it. 00:38:02.036 [2024-05-15 20:29:54.382437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.382851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.382878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.036 qpair failed and we were unable to recover it. 00:38:02.036 [2024-05-15 20:29:54.383305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.383759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.383788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.036 qpair failed and we were unable to recover it. 00:38:02.036 [2024-05-15 20:29:54.384214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.384647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.384676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.036 qpair failed and we were unable to recover it. 00:38:02.036 [2024-05-15 20:29:54.385112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.385512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.385540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.036 qpair failed and we were unable to recover it. 00:38:02.036 [2024-05-15 20:29:54.385951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.386340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.386368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.036 qpair failed and we were unable to recover it. 
00:38:02.036 [2024-05-15 20:29:54.386698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.387075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.387101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.036 qpair failed and we were unable to recover it. 00:38:02.036 [2024-05-15 20:29:54.387413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.387835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.387863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.036 qpair failed and we were unable to recover it. 00:38:02.036 [2024-05-15 20:29:54.388238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.388597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.388626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.036 qpair failed and we were unable to recover it. 00:38:02.036 [2024-05-15 20:29:54.389045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.389443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.389471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.036 qpair failed and we were unable to recover it. 00:38:02.036 [2024-05-15 20:29:54.389882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.390328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.390359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.036 qpair failed and we were unable to recover it. 00:38:02.036 [2024-05-15 20:29:54.390812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.391118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.391144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.036 qpair failed and we were unable to recover it. 00:38:02.036 [2024-05-15 20:29:54.391566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.392007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.392039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.036 qpair failed and we were unable to recover it. 
00:38:02.036 [2024-05-15 20:29:54.392353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.392551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.392577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.036 qpair failed and we were unable to recover it. 00:38:02.036 [2024-05-15 20:29:54.393004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.393484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.393513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.036 qpair failed and we were unable to recover it. 00:38:02.036 [2024-05-15 20:29:54.393948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.394347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.394375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.036 qpair failed and we were unable to recover it. 00:38:02.036 [2024-05-15 20:29:54.394887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.395275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.395303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.036 qpair failed and we were unable to recover it. 00:38:02.036 [2024-05-15 20:29:54.395503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.395983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.396010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.036 qpair failed and we were unable to recover it. 00:38:02.036 [2024-05-15 20:29:54.396440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.396826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.036 [2024-05-15 20:29:54.396854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.036 qpair failed and we were unable to recover it. 00:38:02.037 [2024-05-15 20:29:54.397244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.397649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.397678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.037 qpair failed and we were unable to recover it. 
00:38:02.037 [2024-05-15 20:29:54.398108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.398397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.398425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.037 qpair failed and we were unable to recover it. 00:38:02.037 [2024-05-15 20:29:54.398860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.399255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.399281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.037 qpair failed and we were unable to recover it. 00:38:02.037 [2024-05-15 20:29:54.399541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.399980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.400006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.037 qpair failed and we were unable to recover it. 00:38:02.037 [2024-05-15 20:29:54.400416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.400837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.400864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.037 qpair failed and we were unable to recover it. 00:38:02.037 [2024-05-15 20:29:54.401269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.401597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.401630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.037 qpair failed and we were unable to recover it. 00:38:02.037 [2024-05-15 20:29:54.402062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.402463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.402492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.037 qpair failed and we were unable to recover it. 00:38:02.037 [2024-05-15 20:29:54.402808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.403238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.403265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.037 qpair failed and we were unable to recover it. 
00:38:02.037 [2024-05-15 20:29:54.403695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.404095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.404122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.037 qpair failed and we were unable to recover it. 00:38:02.037 [2024-05-15 20:29:54.404552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.404977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.405004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.037 qpair failed and we were unable to recover it. 00:38:02.037 [2024-05-15 20:29:54.405424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.405859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.405886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.037 qpair failed and we were unable to recover it. 00:38:02.037 [2024-05-15 20:29:54.406335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.406707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.406735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.037 qpair failed and we were unable to recover it. 00:38:02.037 [2024-05-15 20:29:54.407163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.407574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.407603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.037 qpair failed and we were unable to recover it. 00:38:02.037 [2024-05-15 20:29:54.408033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.408430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.408459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.037 qpair failed and we were unable to recover it. 00:38:02.037 [2024-05-15 20:29:54.408871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.409333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.409361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.037 qpair failed and we were unable to recover it. 
00:38:02.037 [2024-05-15 20:29:54.409794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.410194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.410221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.037 qpair failed and we were unable to recover it. 00:38:02.037 [2024-05-15 20:29:54.410628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.411034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.411061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.037 qpair failed and we were unable to recover it. 00:38:02.037 [2024-05-15 20:29:54.411476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.411875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.411902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.037 qpair failed and we were unable to recover it. 00:38:02.037 [2024-05-15 20:29:54.412402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.412807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.412835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.037 qpair failed and we were unable to recover it. 00:38:02.037 [2024-05-15 20:29:54.413124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.413433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.413460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.037 qpair failed and we were unable to recover it. 00:38:02.037 [2024-05-15 20:29:54.413891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.414323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.414353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.037 qpair failed and we were unable to recover it. 00:38:02.037 [2024-05-15 20:29:54.414734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.415149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.415175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.037 qpair failed and we were unable to recover it. 
00:38:02.037 [2024-05-15 20:29:54.415600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.416003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.416029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.037 qpair failed and we were unable to recover it. 00:38:02.037 [2024-05-15 20:29:54.416461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.416967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.416994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.037 qpair failed and we were unable to recover it. 00:38:02.037 [2024-05-15 20:29:54.417460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.417856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.417885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.037 qpair failed and we were unable to recover it. 00:38:02.037 [2024-05-15 20:29:54.418193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.418510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.037 [2024-05-15 20:29:54.418540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.037 qpair failed and we were unable to recover it. 00:38:02.037 [2024-05-15 20:29:54.418970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.419371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.419400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.038 qpair failed and we were unable to recover it. 00:38:02.038 [2024-05-15 20:29:54.419840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.420141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.420167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.038 qpair failed and we were unable to recover it. 00:38:02.038 [2024-05-15 20:29:54.420524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.420951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.420979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.038 qpair failed and we were unable to recover it. 
00:38:02.038 [2024-05-15 20:29:54.421420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.421741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.421769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.038 qpair failed and we were unable to recover it. 00:38:02.038 [2024-05-15 20:29:54.422198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.422539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.422569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.038 qpair failed and we were unable to recover it. 00:38:02.038 [2024-05-15 20:29:54.422983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.423289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.423329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.038 qpair failed and we were unable to recover it. 00:38:02.038 [2024-05-15 20:29:54.423819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.424247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.424274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.038 qpair failed and we were unable to recover it. 00:38:02.038 [2024-05-15 20:29:54.424707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.425127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.425154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.038 qpair failed and we were unable to recover it. 00:38:02.038 [2024-05-15 20:29:54.425482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.425961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.425988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.038 qpair failed and we were unable to recover it. 00:38:02.038 [2024-05-15 20:29:54.426427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.426860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.426887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.038 qpair failed and we were unable to recover it. 
00:38:02.038 [2024-05-15 20:29:54.427301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.427649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.427676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.038 qpair failed and we were unable to recover it. 00:38:02.038 [2024-05-15 20:29:54.428137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.428541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.428569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.038 qpair failed and we were unable to recover it. 00:38:02.038 [2024-05-15 20:29:54.428985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.429406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.429437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.038 qpair failed and we were unable to recover it. 00:38:02.038 [2024-05-15 20:29:54.429876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.430305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.430346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.038 qpair failed and we were unable to recover it. 00:38:02.038 [2024-05-15 20:29:54.430817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.431216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.431244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.038 qpair failed and we were unable to recover it. 00:38:02.038 [2024-05-15 20:29:54.431556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.432006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.432033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.038 qpair failed and we were unable to recover it. 00:38:02.038 [2024-05-15 20:29:54.432449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.432900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.432927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.038 qpair failed and we were unable to recover it. 
00:38:02.038 [2024-05-15 20:29:54.433361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.433795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.433822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.038 qpair failed and we were unable to recover it. 00:38:02.038 [2024-05-15 20:29:54.434238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.436616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.436689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.038 qpair failed and we were unable to recover it. 00:38:02.038 [2024-05-15 20:29:54.437248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.439376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.439437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.038 qpair failed and we were unable to recover it. 00:38:02.038 [2024-05-15 20:29:54.439881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.440306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.440346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.038 qpair failed and we were unable to recover it. 00:38:02.038 [2024-05-15 20:29:54.440804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.441226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.441252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.038 qpair failed and we were unable to recover it. 00:38:02.038 [2024-05-15 20:29:54.441705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.441923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.441950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.038 qpair failed and we were unable to recover it. 00:38:02.038 [2024-05-15 20:29:54.442379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.442808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.442836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.038 qpair failed and we were unable to recover it. 
00:38:02.038 [2024-05-15 20:29:54.443246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.443501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.443529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.038 qpair failed and we were unable to recover it. 00:38:02.038 [2024-05-15 20:29:54.443961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.444383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.444411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.038 qpair failed and we were unable to recover it. 00:38:02.038 [2024-05-15 20:29:54.444858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.445209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.445236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.038 qpair failed and we were unable to recover it. 00:38:02.038 [2024-05-15 20:29:54.445710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.446116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.446142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.038 qpair failed and we were unable to recover it. 00:38:02.038 [2024-05-15 20:29:54.446563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.446959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.446986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.038 qpair failed and we were unable to recover it. 00:38:02.038 [2024-05-15 20:29:54.447301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.447801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.038 [2024-05-15 20:29:54.447828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.038 qpair failed and we were unable to recover it. 00:38:02.038 [2024-05-15 20:29:54.448264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.448571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.448598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.039 qpair failed and we were unable to recover it. 
00:38:02.039 [2024-05-15 20:29:54.449031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.449513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.449542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.039 qpair failed and we were unable to recover it. 00:38:02.039 [2024-05-15 20:29:54.449985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.450385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.450413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.039 qpair failed and we were unable to recover it. 00:38:02.039 [2024-05-15 20:29:54.450823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.451220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.451246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.039 qpair failed and we were unable to recover it. 00:38:02.039 [2024-05-15 20:29:54.451708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.452121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.452148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.039 qpair failed and we were unable to recover it. 00:38:02.039 [2024-05-15 20:29:54.452556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.453003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.453030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.039 qpair failed and we were unable to recover it. 00:38:02.039 [2024-05-15 20:29:54.453447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.453944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.453972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.039 qpair failed and we were unable to recover it. 00:38:02.039 [2024-05-15 20:29:54.454428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.454840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.454868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.039 qpair failed and we were unable to recover it. 
00:38:02.039 [2024-05-15 20:29:54.455245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.455660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.455688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.039 qpair failed and we were unable to recover it. 00:38:02.039 [2024-05-15 20:29:54.456122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.456541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.456570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.039 qpair failed and we were unable to recover it. 00:38:02.039 [2024-05-15 20:29:54.457009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.457458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.457487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.039 qpair failed and we were unable to recover it. 00:38:02.039 [2024-05-15 20:29:54.457918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.458341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.458369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.039 qpair failed and we were unable to recover it. 00:38:02.039 [2024-05-15 20:29:54.458811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.459215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.459242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.039 qpair failed and we were unable to recover it. 00:38:02.039 [2024-05-15 20:29:54.459619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.460083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.460109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.039 qpair failed and we were unable to recover it. 00:38:02.039 [2024-05-15 20:29:54.460443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.460867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.460893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.039 qpair failed and we were unable to recover it. 
00:38:02.039 [2024-05-15 20:29:54.461205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.039 [2024-05-15 20:29:54.461497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.039 [2024-05-15 20:29:54.461526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420
00:38:02.039 qpair failed and we were unable to recover it.
00:38:02.039 [2024-05-15 20:29:54.461908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.039 [2024-05-15 20:29:54.462355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.039 [2024-05-15 20:29:54.462385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420
00:38:02.039 qpair failed and we were unable to recover it.
00:38:02.039 [2024-05-15 20:29:54.462810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.039 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 325329 Killed "${NVMF_APP[@]}" "$@"
00:38:02.039 [2024-05-15 20:29:54.463264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.039 [2024-05-15 20:29:54.463291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420
00:38:02.039 qpair failed and we were unable to recover it.
00:38:02.039 20:29:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:38:02.039 [2024-05-15 20:29:54.463701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.039 20:29:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:38:02.039 [2024-05-15 20:29:54.464112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.039 [2024-05-15 20:29:54.464141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420
00:38:02.039 qpair failed and we were unable to recover it.
00:38:02.039 20:29:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:38:02.039 [2024-05-15 20:29:54.464549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.039 20:29:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable
00:38:02.039 20:29:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:02.039 [2024-05-15 20:29:54.464998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.039 [2024-05-15 20:29:54.465025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420
00:38:02.039 qpair failed and we were unable to recover it.
00:38:02.039 [2024-05-15 20:29:54.465454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.039 [2024-05-15 20:29:54.465885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.039 [2024-05-15 20:29:54.465914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420
00:38:02.039 qpair failed and we were unable to recover it.
00:38:02.039 [2024-05-15 20:29:54.466335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.466688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.466715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.039 qpair failed and we were unable to recover it. 00:38:02.039 [2024-05-15 20:29:54.467012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.467421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.467448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.039 qpair failed and we were unable to recover it. 00:38:02.039 [2024-05-15 20:29:54.467763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.468210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.468236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.039 qpair failed and we were unable to recover it. 00:38:02.039 [2024-05-15 20:29:54.468639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.468956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.468984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.039 qpair failed and we were unable to recover it. 00:38:02.039 [2024-05-15 20:29:54.469453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.469748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.469777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.039 qpair failed and we were unable to recover it. 00:38:02.039 [2024-05-15 20:29:54.470130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.470573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.470601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.039 qpair failed and we were unable to recover it. 00:38:02.039 [2024-05-15 20:29:54.471006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.471376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.039 [2024-05-15 20:29:54.471414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.039 qpair failed and we were unable to recover it. 
00:38:02.039 [2024-05-15 20:29:54.471729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.039 [2024-05-15 20:29:54.472122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.040 [2024-05-15 20:29:54.472152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420
00:38:02.040 qpair failed and we were unable to recover it.
00:38:02.040 [2024-05-15 20:29:54.472463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.040 [2024-05-15 20:29:54.472850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.040 [2024-05-15 20:29:54.472880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420
00:38:02.040 qpair failed and we were unable to recover it.
00:38:02.040 20:29:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=326200
00:38:02.040 [2024-05-15 20:29:54.473286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.040 20:29:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 326200
00:38:02.040 [2024-05-15 20:29:54.473746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.040 [2024-05-15 20:29:54.473777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420
00:38:02.040 qpair failed and we were unable to recover it.
00:38:02.040 20:29:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 326200 ']'
00:38:02.040 20:29:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:38:02.040 20:29:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:38:02.040 [2024-05-15 20:29:54.474214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.040 20:29:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100
00:38:02.040 [2024-05-15 20:29:54.474628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.040 20:29:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:38:02.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:38:02.040 [2024-05-15 20:29:54.474661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420
00:38:02.040 qpair failed and we were unable to recover it.
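Editor's note: the trace above records nvmfpid=326200 and waitforlisten blocking until the restarted target process listens on the UNIX domain socket /var/tmp/spdk.sock. A hedged, stand-alone C sketch of that wait pattern (an illustration only, not the autotest helper itself):

/*
 * Hypothetical sketch of "wait until something listens on /var/tmp/spdk.sock":
 * poll an AF_UNIX connect() until the RPC socket accepts, or give up after a
 * timeout. Path and the idea of a retry budget are taken from the log above.
 */
#include <stdio.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_unix_listener(const char *path, int timeout_sec)
{
	struct sockaddr_un addr = { .sun_family = AF_UNIX };
	snprintf(addr.sun_path, sizeof(addr.sun_path), "%s", path);

	for (int i = 0; i < timeout_sec * 10; i++) {
		int fd = socket(AF_UNIX, SOCK_STREAM, 0);
		if (fd < 0)
			return -1;
		if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
			close(fd);
			return 0;           /* listener is up */
		}
		close(fd);
		usleep(100 * 1000);         /* retry every 100 ms */
	}
	return -1;                          /* timed out */
}

int main(void)
{
	if (wait_for_unix_listener("/var/tmp/spdk.sock", 30) == 0)
		puts("RPC socket is listening");
	else
		puts("timed out waiting for RPC socket");
	return 0;
}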
00:38:02.040 20:29:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:38:02.040 20:29:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:02.040 [2024-05-15 20:29:54.475100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.475535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.475565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.040 qpair failed and we were unable to recover it. 00:38:02.040 [2024-05-15 20:29:54.475989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.476414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.476447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.040 qpair failed and we were unable to recover it. 00:38:02.040 [2024-05-15 20:29:54.476892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.477329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.477362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.040 qpair failed and we were unable to recover it. 00:38:02.040 [2024-05-15 20:29:54.477796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.478220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.478256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.040 qpair failed and we were unable to recover it. 00:38:02.040 [2024-05-15 20:29:54.478691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.479108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.479140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.040 qpair failed and we were unable to recover it. 00:38:02.040 [2024-05-15 20:29:54.479573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.480004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.480037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.040 qpair failed and we were unable to recover it. 00:38:02.040 [2024-05-15 20:29:54.480466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.480879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.480910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.040 qpair failed and we were unable to recover it. 
00:38:02.040 [2024-05-15 20:29:54.481345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.481724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.481754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.040 qpair failed and we were unable to recover it. 00:38:02.040 [2024-05-15 20:29:54.482173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.482573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.482605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.040 qpair failed and we were unable to recover it. 00:38:02.040 [2024-05-15 20:29:54.483013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.483351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.483382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.040 qpair failed and we were unable to recover it. 00:38:02.040 [2024-05-15 20:29:54.483766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.484134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.484163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.040 qpair failed and we were unable to recover it. 00:38:02.040 [2024-05-15 20:29:54.484587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.485003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.485031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.040 qpair failed and we were unable to recover it. 00:38:02.040 [2024-05-15 20:29:54.485470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.485930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.485966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.040 qpair failed and we were unable to recover it. 00:38:02.040 [2024-05-15 20:29:54.486385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.486700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.486728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.040 qpair failed and we were unable to recover it. 
00:38:02.040 [2024-05-15 20:29:54.487166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.487588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.487617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.040 qpair failed and we were unable to recover it. 00:38:02.040 [2024-05-15 20:29:54.488032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.488389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.488420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.040 qpair failed and we were unable to recover it. 00:38:02.040 [2024-05-15 20:29:54.488857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.489275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.489305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.040 qpair failed and we were unable to recover it. 00:38:02.040 [2024-05-15 20:29:54.489785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.490204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.490233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.040 qpair failed and we were unable to recover it. 00:38:02.040 [2024-05-15 20:29:54.490538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.490959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.490990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.040 qpair failed and we were unable to recover it. 00:38:02.040 [2024-05-15 20:29:54.491302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.491820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.491850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.040 qpair failed and we were unable to recover it. 00:38:02.040 [2024-05-15 20:29:54.492286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.492735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.492766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.040 qpair failed and we were unable to recover it. 
00:38:02.040 [2024-05-15 20:29:54.493185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.493639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.040 [2024-05-15 20:29:54.493670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.040 qpair failed and we were unable to recover it. 00:38:02.040 [2024-05-15 20:29:54.494109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.494533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.494562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.041 qpair failed and we were unable to recover it. 00:38:02.041 [2024-05-15 20:29:54.494948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.495390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.495419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.041 qpair failed and we were unable to recover it. 00:38:02.041 [2024-05-15 20:29:54.495852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.496264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.496295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.041 qpair failed and we were unable to recover it. 00:38:02.041 [2024-05-15 20:29:54.496615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.497046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.497076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.041 qpair failed and we were unable to recover it. 00:38:02.041 [2024-05-15 20:29:54.497521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.497951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.497983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.041 qpair failed and we were unable to recover it. 00:38:02.041 [2024-05-15 20:29:54.498422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.498855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.498886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.041 qpair failed and we were unable to recover it. 
00:38:02.041 [2024-05-15 20:29:54.499270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.499627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.499657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.041 qpair failed and we were unable to recover it. 00:38:02.041 [2024-05-15 20:29:54.500073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.500496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.500525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.041 qpair failed and we were unable to recover it. 00:38:02.041 [2024-05-15 20:29:54.500974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.501397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.501427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.041 qpair failed and we were unable to recover it. 00:38:02.041 [2024-05-15 20:29:54.503410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.503930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.503967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.041 qpair failed and we were unable to recover it. 00:38:02.041 [2024-05-15 20:29:54.505890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.506365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.506411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.041 qpair failed and we were unable to recover it. 00:38:02.041 [2024-05-15 20:29:54.506734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.508423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.508484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.041 qpair failed and we were unable to recover it. 00:38:02.041 [2024-05-15 20:29:54.508831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.509269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.509298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.041 qpair failed and we were unable to recover it. 
00:38:02.041 [2024-05-15 20:29:54.509737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.510165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.510194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.041 qpair failed and we were unable to recover it. 00:38:02.041 [2024-05-15 20:29:54.510684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.511112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.511143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.041 qpair failed and we were unable to recover it. 00:38:02.041 [2024-05-15 20:29:54.511569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.512014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.512044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.041 qpair failed and we were unable to recover it. 00:38:02.041 [2024-05-15 20:29:54.512495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.512916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.512945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.041 qpair failed and we were unable to recover it. 00:38:02.041 [2024-05-15 20:29:54.513355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.513784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.513815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.041 qpair failed and we were unable to recover it. 00:38:02.041 [2024-05-15 20:29:54.514222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.514620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.514652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.041 qpair failed and we were unable to recover it. 00:38:02.041 [2024-05-15 20:29:54.515077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.515509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.515539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.041 qpair failed and we were unable to recover it. 
00:38:02.041 [2024-05-15 20:29:54.515988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.516410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.516443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.041 qpair failed and we were unable to recover it. 00:38:02.041 [2024-05-15 20:29:54.516754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.517219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.517249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.041 qpair failed and we were unable to recover it. 00:38:02.041 [2024-05-15 20:29:54.517693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.518118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.518147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.041 qpair failed and we were unable to recover it. 00:38:02.041 [2024-05-15 20:29:54.518429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.518884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.518913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.041 qpair failed and we were unable to recover it. 00:38:02.041 [2024-05-15 20:29:54.519218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.519511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.519541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.041 qpair failed and we were unable to recover it. 00:38:02.041 [2024-05-15 20:29:54.519966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.520353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.520383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.041 qpair failed and we were unable to recover it. 00:38:02.041 [2024-05-15 20:29:54.520682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.521116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.521144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.041 qpair failed and we were unable to recover it. 
00:38:02.041 [2024-05-15 20:29:54.521570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.521993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.041 [2024-05-15 20:29:54.522022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.041 qpair failed and we were unable to recover it. 00:38:02.041 [2024-05-15 20:29:54.522464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.307 [2024-05-15 20:29:54.522893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.307 [2024-05-15 20:29:54.522925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.307 qpair failed and we were unable to recover it. 00:38:02.307 [2024-05-15 20:29:54.523346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.307 [2024-05-15 20:29:54.523791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.307 [2024-05-15 20:29:54.523820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.307 qpair failed and we were unable to recover it. 00:38:02.307 [2024-05-15 20:29:54.524258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.307 [2024-05-15 20:29:54.524661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.307 [2024-05-15 20:29:54.524693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.307 qpair failed and we were unable to recover it. 00:38:02.307 [2024-05-15 20:29:54.525162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.307 [2024-05-15 20:29:54.525488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.307 [2024-05-15 20:29:54.525525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.307 qpair failed and we were unable to recover it. 00:38:02.307 [2024-05-15 20:29:54.525955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.307 [2024-05-15 20:29:54.526378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.307 [2024-05-15 20:29:54.526409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.307 qpair failed and we were unable to recover it. 00:38:02.307 [2024-05-15 20:29:54.526840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.307 [2024-05-15 20:29:54.527182] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:38:02.307 [2024-05-15 20:29:54.527246] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:02.307 [2024-05-15 20:29:54.527261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.307 [2024-05-15 20:29:54.527289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.307 qpair failed and we were unable to recover it. 
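Editor's note: the "Starting SPDK v24.05-pre ... [ DPDK EAL parameters: ... ]" record above is the argument vector handed to DPDK's environment abstraction layer while the new nvmf_tgt initializes. A hedged C sketch of the underlying rte_eal_init() call shape, reusing a few of the logged flags (illustrative only, not SPDK's env layer):

/*
 * Hypothetical DPDK EAL bring-up. The flags mirror part of the logged
 * parameter list (-c 0xF0, --no-telemetry, --file-prefix=spdk0,
 * --proc-type=auto); the program name "nvmf" matches the log.
 */
#include <stdio.h>
#include <rte_eal.h>

int main(void)
{
	char *eal_argv[] = {
		"nvmf", "-c", "0xF0", "--no-telemetry",
		"--file-prefix=spdk0", "--proc-type=auto", NULL
	};
	int eal_argc = 6;

	if (rte_eal_init(eal_argc, eal_argv) < 0) {
		fprintf(stderr, "rte_eal_init() failed\n");
		return 1;
	}
	printf("EAL initialized\n");
	rte_eal_cleanup();
	return 0;
}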
00:38:02.307 [2024-05-15 20:29:54.527622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.307 [2024-05-15 20:29:54.527949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.307 [2024-05-15 20:29:54.527976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.307 qpair failed and we were unable to recover it. 00:38:02.307 [2024-05-15 20:29:54.528407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.307 [2024-05-15 20:29:54.528874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.307 [2024-05-15 20:29:54.528904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.307 qpair failed and we were unable to recover it. 00:38:02.307 [2024-05-15 20:29:54.529350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.307 [2024-05-15 20:29:54.529777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.307 [2024-05-15 20:29:54.529808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.307 qpair failed and we were unable to recover it. 00:38:02.307 [2024-05-15 20:29:54.530172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.307 [2024-05-15 20:29:54.530496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.307 [2024-05-15 20:29:54.530527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.307 qpair failed and we were unable to recover it. 00:38:02.307 [2024-05-15 20:29:54.530966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.531391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.531423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.308 qpair failed and we were unable to recover it. 00:38:02.308 [2024-05-15 20:29:54.531870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.532306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.532347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.308 qpair failed and we were unable to recover it. 00:38:02.308 [2024-05-15 20:29:54.532686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.532961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.532991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.308 qpair failed and we were unable to recover it. 
00:38:02.308 [2024-05-15 20:29:54.533424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.533903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.533933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.308 qpair failed and we were unable to recover it. 00:38:02.308 [2024-05-15 20:29:54.534340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.534739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.534769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.308 qpair failed and we were unable to recover it. 00:38:02.308 [2024-05-15 20:29:54.535170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.535572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.535603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.308 qpair failed and we were unable to recover it. 00:38:02.308 [2024-05-15 20:29:54.536039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.536281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.536311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.308 qpair failed and we were unable to recover it. 00:38:02.308 [2024-05-15 20:29:54.536723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.536967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.536998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.308 qpair failed and we were unable to recover it. 00:38:02.308 [2024-05-15 20:29:54.537460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.537888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.537917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.308 qpair failed and we were unable to recover it. 00:38:02.308 [2024-05-15 20:29:54.538339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.538771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.538801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.308 qpair failed and we were unable to recover it. 
00:38:02.308 [2024-05-15 20:29:54.539238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.539674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.539704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.308 qpair failed and we were unable to recover it. 00:38:02.308 [2024-05-15 20:29:54.540129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.540504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.540536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.308 qpair failed and we were unable to recover it. 00:38:02.308 [2024-05-15 20:29:54.540840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.541214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.541243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.308 qpair failed and we were unable to recover it. 00:38:02.308 [2024-05-15 20:29:54.541663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.541973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.542002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.308 qpair failed and we were unable to recover it. 00:38:02.308 [2024-05-15 20:29:54.542439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.542742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.542771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.308 qpair failed and we were unable to recover it. 00:38:02.308 [2024-05-15 20:29:54.543185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.543610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.543639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.308 qpair failed and we were unable to recover it. 00:38:02.308 [2024-05-15 20:29:54.544094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.544520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.544552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.308 qpair failed and we were unable to recover it. 
00:38:02.308 [2024-05-15 20:29:54.544970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.545236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.545265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.308 qpair failed and we were unable to recover it. 00:38:02.308 [2024-05-15 20:29:54.545718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.546148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.546177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.308 qpair failed and we were unable to recover it. 00:38:02.308 [2024-05-15 20:29:54.546677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.547075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.547104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.308 qpair failed and we were unable to recover it. 00:38:02.308 [2024-05-15 20:29:54.547544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.547974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.548008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.308 qpair failed and we were unable to recover it. 00:38:02.308 [2024-05-15 20:29:54.548329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.548760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.548791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.308 qpair failed and we were unable to recover it. 00:38:02.308 [2024-05-15 20:29:54.549228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.549506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.549536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.308 qpair failed and we were unable to recover it. 00:38:02.308 [2024-05-15 20:29:54.549951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.550418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.550448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.308 qpair failed and we were unable to recover it. 
00:38:02.308 [2024-05-15 20:29:54.550859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.551285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.551332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.308 qpair failed and we were unable to recover it. 00:38:02.308 [2024-05-15 20:29:54.551766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.552155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.552184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.308 qpair failed and we were unable to recover it. 00:38:02.308 [2024-05-15 20:29:54.552475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.552933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.552962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.308 qpair failed and we were unable to recover it. 00:38:02.308 [2024-05-15 20:29:54.553389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.553825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.553854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.308 qpair failed and we were unable to recover it. 00:38:02.308 [2024-05-15 20:29:54.554294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.554677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.554708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.308 qpair failed and we were unable to recover it. 00:38:02.308 [2024-05-15 20:29:54.555112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.555563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.308 [2024-05-15 20:29:54.555593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.308 qpair failed and we were unable to recover it. 00:38:02.309 [2024-05-15 20:29:54.555774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.556233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.556262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.309 qpair failed and we were unable to recover it. 
00:38:02.309 [2024-05-15 20:29:54.556688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.557111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.557142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.309 qpair failed and we were unable to recover it. 00:38:02.309 [2024-05-15 20:29:54.557579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.557951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.557981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.309 qpair failed and we were unable to recover it. 00:38:02.309 [2024-05-15 20:29:54.558400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.558801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.558836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.309 qpair failed and we were unable to recover it. 00:38:02.309 [2024-05-15 20:29:54.559264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.559674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.559709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.309 qpair failed and we were unable to recover it. 00:38:02.309 [2024-05-15 20:29:54.560134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.560554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.560585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.309 qpair failed and we were unable to recover it. 00:38:02.309 [2024-05-15 20:29:54.561022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.561453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.561484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.309 qpair failed and we were unable to recover it. 00:38:02.309 [2024-05-15 20:29:54.561935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.562356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.562387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.309 qpair failed and we were unable to recover it. 
00:38:02.309 [2024-05-15 20:29:54.562813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.563167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.563196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.309 qpair failed and we were unable to recover it. 00:38:02.309 [2024-05-15 20:29:54.563634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.564060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.564091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.309 qpair failed and we were unable to recover it. 00:38:02.309 [2024-05-15 20:29:54.564364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.564822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.564853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.309 qpair failed and we were unable to recover it. 00:38:02.309 [2024-05-15 20:29:54.565265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.565715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.565747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.309 qpair failed and we were unable to recover it. 00:38:02.309 [2024-05-15 20:29:54.566153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.566602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.566635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.309 qpair failed and we were unable to recover it. 00:38:02.309 [2024-05-15 20:29:54.567053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.567480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.567517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.309 qpair failed and we were unable to recover it. 00:38:02.309 EAL: No free 2048 kB hugepages reported on node 1 00:38:02.309 [2024-05-15 20:29:54.567825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.568262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.568291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.309 qpair failed and we were unable to recover it. 
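Editor's note: the "EAL: No free 2048 kB hugepages reported on node 1" line above means NUMA node 1 had no free 2 MB hugepages when the restarted target initialized. A hedged C sketch that reads the per-node counter from sysfs (path and node number taken from the log; adjust for other nodes):

/*
 * Hypothetical helper: report how many 2048 kB hugepages are free on NUMA
 * node 1 by reading the standard per-node sysfs counter.
 */
#include <stdio.h>

int main(void)
{
	const char *path =
	    "/sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages";
	FILE *f = fopen(path, "r");
	long free_pages = -1;

	if (f == NULL) {
		perror("fopen");
		return 1;
	}
	if (fscanf(f, "%ld", &free_pages) != 1)
		free_pages = -1;
	fclose(f);

	printf("node1 free 2048 kB hugepages: %ld\n", free_pages);
	return 0;
}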
00:38:02.309 [2024-05-15 20:29:54.568719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.569078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.569108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.309 qpair failed and we were unable to recover it. 00:38:02.309 [2024-05-15 20:29:54.569524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.569951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.569983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.309 qpair failed and we were unable to recover it. 00:38:02.309 [2024-05-15 20:29:54.570405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.570857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.570889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.309 qpair failed and we were unable to recover it. 00:38:02.309 [2024-05-15 20:29:54.571352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.571859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.571888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.309 qpair failed and we were unable to recover it. 00:38:02.309 [2024-05-15 20:29:54.572336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.572775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.572803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.309 qpair failed and we were unable to recover it. 00:38:02.309 [2024-05-15 20:29:54.573247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.573741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.573849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.309 qpair failed and we were unable to recover it. 00:38:02.309 [2024-05-15 20:29:54.574348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.574822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.574854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.309 qpair failed and we were unable to recover it. 
00:38:02.309 [2024-05-15 20:29:54.575286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.575858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.575965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.309 qpair failed and we were unable to recover it. 00:38:02.309 [2024-05-15 20:29:54.576557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.577131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.577172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.309 qpair failed and we were unable to recover it. 00:38:02.309 [2024-05-15 20:29:54.577653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.578081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.578110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.309 qpair failed and we were unable to recover it. 00:38:02.309 [2024-05-15 20:29:54.578536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.578950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.578981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.309 qpair failed and we were unable to recover it. 00:38:02.309 [2024-05-15 20:29:54.579418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.579859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.579888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.309 qpair failed and we were unable to recover it. 00:38:02.309 [2024-05-15 20:29:54.580358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.580707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.580741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.309 qpair failed and we were unable to recover it. 00:38:02.309 [2024-05-15 20:29:54.581175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.581602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.581634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.309 qpair failed and we were unable to recover it. 
00:38:02.309 [2024-05-15 20:29:54.581929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.582369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.309 [2024-05-15 20:29:54.582398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.309 qpair failed and we were unable to recover it. 00:38:02.310 [2024-05-15 20:29:54.582879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.583306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.583350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.310 qpair failed and we were unable to recover it. 00:38:02.310 [2024-05-15 20:29:54.583746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.584166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.584195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.310 qpair failed and we were unable to recover it. 00:38:02.310 [2024-05-15 20:29:54.584618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.585041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.585071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.310 qpair failed and we were unable to recover it. 00:38:02.310 [2024-05-15 20:29:54.585506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.585871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.585901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.310 qpair failed and we were unable to recover it. 00:38:02.310 [2024-05-15 20:29:54.586336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.586738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.586768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.310 qpair failed and we were unable to recover it. 00:38:02.310 [2024-05-15 20:29:54.587070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.587462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.587494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.310 qpair failed and we were unable to recover it. 
00:38:02.310 [2024-05-15 20:29:54.587812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.588129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.588162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.310 qpair failed and we were unable to recover it. 00:38:02.310 [2024-05-15 20:29:54.588563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.588989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.589018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.310 qpair failed and we were unable to recover it. 00:38:02.310 [2024-05-15 20:29:54.589421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.589860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.589889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.310 qpair failed and we were unable to recover it. 00:38:02.310 [2024-05-15 20:29:54.590310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.590760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.590790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.310 qpair failed and we were unable to recover it. 00:38:02.310 [2024-05-15 20:29:54.591214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.591628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.591658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.310 qpair failed and we were unable to recover it. 00:38:02.310 [2024-05-15 20:29:54.592084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.592507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.592538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.310 qpair failed and we were unable to recover it. 00:38:02.310 [2024-05-15 20:29:54.592982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.593402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.593433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.310 qpair failed and we were unable to recover it. 
00:38:02.310 [2024-05-15 20:29:54.593859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.596134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.596201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.310 qpair failed and we were unable to recover it. 00:38:02.310 [2024-05-15 20:29:54.596676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.597116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.597147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.310 qpair failed and we were unable to recover it. 00:38:02.310 [2024-05-15 20:29:54.597561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.597862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.597891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.310 qpair failed and we were unable to recover it. 00:38:02.310 [2024-05-15 20:29:54.598338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.598745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.598777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.310 qpair failed and we were unable to recover it. 00:38:02.310 [2024-05-15 20:29:54.599198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.599636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.599669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.310 qpair failed and we were unable to recover it. 00:38:02.310 [2024-05-15 20:29:54.600008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.600368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.600399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.310 qpair failed and we were unable to recover it. 00:38:02.310 [2024-05-15 20:29:54.600841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.601145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.601183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.310 qpair failed and we were unable to recover it. 
00:38:02.310 [2024-05-15 20:29:54.601499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.601924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.601954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.310 qpair failed and we were unable to recover it. 00:38:02.310 [2024-05-15 20:29:54.602395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.602850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.602882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.310 qpair failed and we were unable to recover it. 00:38:02.310 [2024-05-15 20:29:54.603341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.603641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.603669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.310 qpair failed and we were unable to recover it. 00:38:02.310 [2024-05-15 20:29:54.604093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.604522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.604552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.310 qpair failed and we were unable to recover it. 00:38:02.310 [2024-05-15 20:29:54.605001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.605710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.605751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.310 qpair failed and we were unable to recover it. 00:38:02.310 [2024-05-15 20:29:54.606259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.606674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.606708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.310 qpair failed and we were unable to recover it. 00:38:02.310 [2024-05-15 20:29:54.607132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.607561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.607592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.310 qpair failed and we were unable to recover it. 
00:38:02.310 [2024-05-15 20:29:54.608021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.608564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.608670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.310 qpair failed and we were unable to recover it. 00:38:02.310 [2024-05-15 20:29:54.609220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.609689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.609723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.310 qpair failed and we were unable to recover it. 00:38:02.310 [2024-05-15 20:29:54.610048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.610465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.310 [2024-05-15 20:29:54.610497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.310 qpair failed and we were unable to recover it. 00:38:02.311 [2024-05-15 20:29:54.610902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.611305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.611356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.311 qpair failed and we were unable to recover it. 00:38:02.311 [2024-05-15 20:29:54.611790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.612213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.612243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.311 qpair failed and we were unable to recover it. 00:38:02.311 [2024-05-15 20:29:54.612554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.612989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.613020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.311 qpair failed and we were unable to recover it. 00:38:02.311 [2024-05-15 20:29:54.613443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.613866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.613896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.311 qpair failed and we were unable to recover it. 
00:38:02.311 [2024-05-15 20:29:54.614348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.614795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.614838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.311 qpair failed and we were unable to recover it. 00:38:02.311 [2024-05-15 20:29:54.615251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.615681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.615717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.311 qpair failed and we were unable to recover it. 00:38:02.311 [2024-05-15 20:29:54.616197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.616629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.616662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.311 qpair failed and we were unable to recover it. 00:38:02.311 [2024-05-15 20:29:54.617087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.617509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.617540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.311 qpair failed and we were unable to recover it. 00:38:02.311 [2024-05-15 20:29:54.617955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.618376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.618407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.311 qpair failed and we were unable to recover it. 00:38:02.311 [2024-05-15 20:29:54.618802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.619212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.619240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.311 qpair failed and we were unable to recover it. 00:38:02.311 [2024-05-15 20:29:54.619642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.620064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.620092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.311 qpair failed and we were unable to recover it. 
00:38:02.311 [2024-05-15 20:29:54.620517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.620818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.620847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.311 qpair failed and we were unable to recover it. 00:38:02.311 [2024-05-15 20:29:54.621299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.621770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.621801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.311 qpair failed and we were unable to recover it. 00:38:02.311 [2024-05-15 20:29:54.622230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.622650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.622680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.311 qpair failed and we were unable to recover it. 00:38:02.311 [2024-05-15 20:29:54.623119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.623523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.623554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.311 qpair failed and we were unable to recover it. 00:38:02.311 [2024-05-15 20:29:54.623817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.624235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.624263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.311 qpair failed and we were unable to recover it. 00:38:02.311 [2024-05-15 20:29:54.624390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:02.311 [2024-05-15 20:29:54.624734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.625161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.625190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.311 qpair failed and we were unable to recover it. 00:38:02.311 [2024-05-15 20:29:54.625564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.626032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.626062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.311 qpair failed and we were unable to recover it. 
00:38:02.311 [2024-05-15 20:29:54.626501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.626929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.626959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.311 qpair failed and we were unable to recover it. 00:38:02.311 [2024-05-15 20:29:54.627389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.627820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.627849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.311 qpair failed and we were unable to recover it. 00:38:02.311 [2024-05-15 20:29:54.628302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.628767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.628796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.311 qpair failed and we were unable to recover it. 00:38:02.311 [2024-05-15 20:29:54.629227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.629661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.629692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.311 qpair failed and we were unable to recover it. 00:38:02.311 [2024-05-15 20:29:54.629977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.630399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.630430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.311 qpair failed and we were unable to recover it. 00:38:02.311 [2024-05-15 20:29:54.630881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.631242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.631271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.311 qpair failed and we were unable to recover it. 00:38:02.311 [2024-05-15 20:29:54.631704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.632144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.632174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.311 qpair failed and we were unable to recover it. 
00:38:02.311 [2024-05-15 20:29:54.632496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.632957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.632986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.311 qpair failed and we were unable to recover it. 00:38:02.311 [2024-05-15 20:29:54.633426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.633853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.633882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.311 qpair failed and we were unable to recover it. 00:38:02.311 [2024-05-15 20:29:54.634323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.634776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.311 [2024-05-15 20:29:54.634805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.312 qpair failed and we were unable to recover it. 00:38:02.312 [2024-05-15 20:29:54.635258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.635660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.635692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.312 qpair failed and we were unable to recover it. 00:38:02.312 [2024-05-15 20:29:54.635998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.636439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.636471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.312 qpair failed and we were unable to recover it. 00:38:02.312 [2024-05-15 20:29:54.636910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.637152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.637182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.312 qpair failed and we were unable to recover it. 00:38:02.312 [2024-05-15 20:29:54.637475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.637931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.637962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.312 qpair failed and we were unable to recover it. 
00:38:02.312 [2024-05-15 20:29:54.638400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.638832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.638861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.312 qpair failed and we were unable to recover it. 00:38:02.312 [2024-05-15 20:29:54.639289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.639754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.639784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.312 qpair failed and we were unable to recover it. 00:38:02.312 [2024-05-15 20:29:54.640218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.640650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.640680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.312 qpair failed and we were unable to recover it. 00:38:02.312 [2024-05-15 20:29:54.641103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.641539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.641569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.312 qpair failed and we were unable to recover it. 00:38:02.312 [2024-05-15 20:29:54.642018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.642446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.642475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.312 qpair failed and we were unable to recover it. 00:38:02.312 [2024-05-15 20:29:54.642949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.643375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.643407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.312 qpair failed and we were unable to recover it. 00:38:02.312 [2024-05-15 20:29:54.643720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.644121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.644150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.312 qpair failed and we were unable to recover it. 
00:38:02.312 [2024-05-15 20:29:54.644517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.644963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.644992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.312 qpair failed and we were unable to recover it. 00:38:02.312 [2024-05-15 20:29:54.645431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.645868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.645898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.312 qpair failed and we were unable to recover it. 00:38:02.312 [2024-05-15 20:29:54.646333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.646769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.646798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.312 qpair failed and we were unable to recover it. 00:38:02.312 [2024-05-15 20:29:54.647237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.647664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.647694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.312 qpair failed and we were unable to recover it. 00:38:02.312 [2024-05-15 20:29:54.648106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.648531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.648563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.312 qpair failed and we were unable to recover it. 00:38:02.312 [2024-05-15 20:29:54.649003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.649252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.649281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.312 qpair failed and we were unable to recover it. 00:38:02.312 [2024-05-15 20:29:54.649606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.650079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.650109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.312 qpair failed and we were unable to recover it. 
00:38:02.312 [2024-05-15 20:29:54.650547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.650847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.650881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.312 qpair failed and we were unable to recover it. 00:38:02.312 [2024-05-15 20:29:54.651158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.651599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.651629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.312 qpair failed and we were unable to recover it. 00:38:02.312 [2024-05-15 20:29:54.652099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.652488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.652518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.312 qpair failed and we were unable to recover it. 00:38:02.312 [2024-05-15 20:29:54.652831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.653255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.653285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.312 qpair failed and we were unable to recover it. 00:38:02.312 [2024-05-15 20:29:54.653603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.654031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.654060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.312 qpair failed and we were unable to recover it. 00:38:02.312 [2024-05-15 20:29:54.654489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.654795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.654824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.312 qpair failed and we were unable to recover it. 00:38:02.312 [2024-05-15 20:29:54.655311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.655645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.655675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.312 qpair failed and we were unable to recover it. 
00:38:02.312 [2024-05-15 20:29:54.656097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.656525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.656555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.312 qpair failed and we were unable to recover it. 00:38:02.312 [2024-05-15 20:29:54.656993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.312 [2024-05-15 20:29:54.657421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.657453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.313 qpair failed and we were unable to recover it. 00:38:02.313 [2024-05-15 20:29:54.657722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.658144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.658180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.313 qpair failed and we were unable to recover it. 00:38:02.313 [2024-05-15 20:29:54.658615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.658980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.659009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.313 qpair failed and we were unable to recover it. 00:38:02.313 [2024-05-15 20:29:54.659429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.659868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.659897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.313 qpair failed and we were unable to recover it. 00:38:02.313 [2024-05-15 20:29:54.660342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.660743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.660772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.313 qpair failed and we were unable to recover it. 00:38:02.313 [2024-05-15 20:29:54.661129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.661556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.661587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.313 qpair failed and we were unable to recover it. 
00:38:02.313 [2024-05-15 20:29:54.662023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.662450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.662481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.313 qpair failed and we were unable to recover it. 00:38:02.313 [2024-05-15 20:29:54.662792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.663218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.663249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.313 qpair failed and we were unable to recover it. 00:38:02.313 [2024-05-15 20:29:54.663665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.664123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.664153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.313 qpair failed and we were unable to recover it. 00:38:02.313 [2024-05-15 20:29:54.664575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.665006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.665036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.313 qpair failed and we were unable to recover it. 00:38:02.313 [2024-05-15 20:29:54.665331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.665623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.665651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.313 qpair failed and we were unable to recover it. 00:38:02.313 [2024-05-15 20:29:54.666122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.666646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.666765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.313 qpair failed and we were unable to recover it. 00:38:02.313 [2024-05-15 20:29:54.667311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.667826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.667859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.313 qpair failed and we were unable to recover it. 
00:38:02.313 [2024-05-15 20:29:54.668285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.668747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.668778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.313 qpair failed and we were unable to recover it. 00:38:02.313 [2024-05-15 20:29:54.669098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.669634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.669740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.313 qpair failed and we were unable to recover it. 00:38:02.313 [2024-05-15 20:29:54.670250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.670681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.670716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.313 qpair failed and we were unable to recover it. 00:38:02.313 [2024-05-15 20:29:54.671031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.671424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.671456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.313 qpair failed and we were unable to recover it. 00:38:02.313 [2024-05-15 20:29:54.671875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.672335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.672367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.313 qpair failed and we were unable to recover it. 00:38:02.313 [2024-05-15 20:29:54.672802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.673201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.673231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.313 qpair failed and we were unable to recover it. 00:38:02.313 [2024-05-15 20:29:54.673661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.674086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.674116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.313 qpair failed and we were unable to recover it. 
00:38:02.313 [2024-05-15 20:29:54.674535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.674967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.674996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.313 qpair failed and we were unable to recover it. 00:38:02.313 [2024-05-15 20:29:54.675427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.675832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.675861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.313 qpair failed and we were unable to recover it. 00:38:02.313 [2024-05-15 20:29:54.676234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.676645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.676676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.313 qpair failed and we were unable to recover it. 00:38:02.313 [2024-05-15 20:29:54.677098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.677536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.677567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.313 qpair failed and we were unable to recover it. 00:38:02.313 [2024-05-15 20:29:54.677969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.678308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.678351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.313 qpair failed and we were unable to recover it. 00:38:02.313 [2024-05-15 20:29:54.678654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.679090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.679120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.313 qpair failed and we were unable to recover it. 00:38:02.313 [2024-05-15 20:29:54.679560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.679926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.679957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.313 qpair failed and we were unable to recover it. 
00:38:02.313 [2024-05-15 20:29:54.680377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.680697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.680725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.313 qpair failed and we were unable to recover it. 00:38:02.313 [2024-05-15 20:29:54.681162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.681587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.681618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.313 qpair failed and we were unable to recover it. 00:38:02.313 [2024-05-15 20:29:54.681992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.682415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.682446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.313 qpair failed and we were unable to recover it. 00:38:02.313 [2024-05-15 20:29:54.682909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.313 [2024-05-15 20:29:54.683352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.683383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.314 qpair failed and we were unable to recover it. 00:38:02.314 [2024-05-15 20:29:54.683831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.684139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.684169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.314 qpair failed and we were unable to recover it. 00:38:02.314 [2024-05-15 20:29:54.684589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.685010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.685040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.314 qpair failed and we were unable to recover it. 00:38:02.314 [2024-05-15 20:29:54.685425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.685809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.685839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.314 qpair failed and we were unable to recover it. 
00:38:02.314 [2024-05-15 20:29:54.686150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.686565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.686596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.314 qpair failed and we were unable to recover it. 00:38:02.314 [2024-05-15 20:29:54.686975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.687426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.687458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.314 qpair failed and we were unable to recover it. 00:38:02.314 [2024-05-15 20:29:54.687962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.688389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.688419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.314 qpair failed and we were unable to recover it. 00:38:02.314 [2024-05-15 20:29:54.688853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.689277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.689307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.314 qpair failed and we were unable to recover it. 00:38:02.314 [2024-05-15 20:29:54.689730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.690026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.690056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.314 qpair failed and we were unable to recover it. 00:38:02.314 [2024-05-15 20:29:54.690490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.690916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.690945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.314 qpair failed and we were unable to recover it. 00:38:02.314 [2024-05-15 20:29:54.691345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.691843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.691873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.314 qpair failed and we were unable to recover it. 
00:38:02.314 [2024-05-15 20:29:54.692303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.692763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.692792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.314 qpair failed and we were unable to recover it. 00:38:02.314 [2024-05-15 20:29:54.693171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.693617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.693649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.314 qpair failed and we were unable to recover it. 00:38:02.314 [2024-05-15 20:29:54.694067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.694461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.694491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.314 qpair failed and we were unable to recover it. 00:38:02.314 [2024-05-15 20:29:54.694748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.695170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.695200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.314 qpair failed and we were unable to recover it. 00:38:02.314 [2024-05-15 20:29:54.695644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.695944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.695977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.314 qpair failed and we were unable to recover it. 00:38:02.314 [2024-05-15 20:29:54.696422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.696887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.696915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.314 qpair failed and we were unable to recover it. 00:38:02.314 [2024-05-15 20:29:54.697230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.697729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.697759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.314 qpair failed and we were unable to recover it. 
00:38:02.314 [2024-05-15 20:29:54.698199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.698610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.698639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.314 qpair failed and we were unable to recover it. 00:38:02.314 [2024-05-15 20:29:54.699072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.699380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.699411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.314 qpair failed and we were unable to recover it. 00:38:02.314 [2024-05-15 20:29:54.699841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.700271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.700302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.314 qpair failed and we were unable to recover it. 00:38:02.314 [2024-05-15 20:29:54.700762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.701185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.701215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.314 qpair failed and we were unable to recover it. 00:38:02.314 [2024-05-15 20:29:54.701635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.702066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.702103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.314 qpair failed and we were unable to recover it. 00:38:02.314 [2024-05-15 20:29:54.702531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.702948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.702978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.314 qpair failed and we were unable to recover it. 00:38:02.314 [2024-05-15 20:29:54.703421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.703851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.703879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.314 qpair failed and we were unable to recover it. 
00:38:02.314 [2024-05-15 20:29:54.704296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.704700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.704730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.314 qpair failed and we were unable to recover it. 00:38:02.314 [2024-05-15 20:29:54.705162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.705568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.705600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.314 qpair failed and we were unable to recover it. 00:38:02.314 [2024-05-15 20:29:54.706024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.706408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.706439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.314 qpair failed and we were unable to recover it. 00:38:02.314 [2024-05-15 20:29:54.706828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.707255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.707284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.314 qpair failed and we were unable to recover it. 00:38:02.314 [2024-05-15 20:29:54.707756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.708021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.708049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.314 qpair failed and we were unable to recover it. 00:38:02.314 [2024-05-15 20:29:54.708362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.314 [2024-05-15 20:29:54.708829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.315 [2024-05-15 20:29:54.708859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.315 qpair failed and we were unable to recover it. 00:38:02.315 [2024-05-15 20:29:54.709292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.315 [2024-05-15 20:29:54.709770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.315 [2024-05-15 20:29:54.709800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.315 qpair failed and we were unable to recover it. 
00:38:02.315 [2024-05-15 20:29:54.710235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.315 [2024-05-15 20:29:54.710593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.315 [2024-05-15 20:29:54.710624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.315 qpair failed and we were unable to recover it. 00:38:02.315 [2024-05-15 20:29:54.711049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.315 [2024-05-15 20:29:54.711462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.315 [2024-05-15 20:29:54.711493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.315 qpair failed and we were unable to recover it. 00:38:02.315 [2024-05-15 20:29:54.711950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.315 [2024-05-15 20:29:54.712256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.315 [2024-05-15 20:29:54.712285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.315 qpair failed and we were unable to recover it. 00:38:02.315 [2024-05-15 20:29:54.712575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.315 [2024-05-15 20:29:54.713001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.315 [2024-05-15 20:29:54.713032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.315 qpair failed and we were unable to recover it. 00:38:02.315 [2024-05-15 20:29:54.713463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.315 [2024-05-15 20:29:54.713890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.315 [2024-05-15 20:29:54.713919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.315 qpair failed and we were unable to recover it. 00:38:02.315 [2024-05-15 20:29:54.714346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.315 [2024-05-15 20:29:54.714781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.315 [2024-05-15 20:29:54.714812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.315 qpair failed and we were unable to recover it. 00:38:02.315 [2024-05-15 20:29:54.715177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.315 [2024-05-15 20:29:54.715612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.315 [2024-05-15 20:29:54.715645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.315 qpair failed and we were unable to recover it. 
00:38:02.315 [2024-05-15 20:29:54.716064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.315 [2024-05-15 20:29:54.716462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.315 [2024-05-15 20:29:54.716493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420
00:38:02.315 qpair failed and we were unable to recover it.
00:38:02.315 [2024-05-15 20:29:54.716934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.315 [2024-05-15 20:29:54.717365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.315 [2024-05-15 20:29:54.717395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420
00:38:02.315 qpair failed and we were unable to recover it.
00:38:02.315 [2024-05-15 20:29:54.717817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.315 [2024-05-15 20:29:54.718248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.315 [2024-05-15 20:29:54.718278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420
00:38:02.315 qpair failed and we were unable to recover it.
00:38:02.315 [2024-05-15 20:29:54.718607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.315 [2024-05-15 20:29:54.719033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.315 [2024-05-15 20:29:54.719063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420
00:38:02.315 qpair failed and we were unable to recover it.
00:38:02.315 [2024-05-15 20:29:54.719509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.315 [2024-05-15 20:29:54.719927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.315 [2024-05-15 20:29:54.719957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420
00:38:02.315 qpair failed and we were unable to recover it.
00:38:02.315 [2024-05-15 20:29:54.720112] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:38:02.315 [2024-05-15 20:29:54.720157] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:38:02.315 [2024-05-15 20:29:54.720167] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:38:02.315 [2024-05-15 20:29:54.720174] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:38:02.315 [2024-05-15 20:29:54.720180] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
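The app_setup_trace notices above describe how to pull the tracepoint data for this run. A minimal sketch of doing that from a shell on the test host, assuming the nvmf application is still running and using only the invocation and shm file name printed in the notices (the destination file names are made up for illustration):
  # Sketch only: capture the runtime snapshot exactly as advertised by app_setup_trace.
  spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace_snapshot.txt
  # Or keep the raw shared-memory trace file for offline analysis/debug:
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0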
00:38:02.315 [2024-05-15 20:29:54.720407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.315 [2024-05-15 20:29:54.720376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:38:02.315 [2024-05-15 20:29:54.720512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:38:02.315 [2024-05-15 20:29:54.720729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:38:02.315 [2024-05-15 20:29:54.720799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.315 [2024-05-15 20:29:54.720827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420
00:38:02.315 qpair failed and we were unable to recover it.
00:38:02.315 [2024-05-15 20:29:54.720729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:38:02.315 [2024-05-15 20:29:54.721249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.315 [2024-05-15 20:29:54.721684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.315 [2024-05-15 20:29:54.721716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420
00:38:02.315 qpair failed and we were unable to recover it.
00:38:02.315 [2024-05-15 20:29:54.722175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.315 [2024-05-15 20:29:54.722582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.315 [2024-05-15 20:29:54.722611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420
00:38:02.315 qpair failed and we were unable to recover it.
00:38:02.315 [2024-05-15 20:29:54.723039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.315 [2024-05-15 20:29:54.723517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.315 [2024-05-15 20:29:54.723546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420
00:38:02.315 qpair failed and we were unable to recover it.
00:38:02.315 [2024-05-15 20:29:54.723994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.315 [2024-05-15 20:29:54.724426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.315 [2024-05-15 20:29:54.724456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420
00:38:02.315 qpair failed and we were unable to recover it.
00:38:02.315 [2024-05-15 20:29:54.724899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.315 [2024-05-15 20:29:54.725244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.315 [2024-05-15 20:29:54.725274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420
00:38:02.315 qpair failed and we were unable to recover it.
00:38:02.315 [2024-05-15 20:29:54.725766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.315 [2024-05-15 20:29:54.726188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.315 [2024-05-15 20:29:54.726218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.315 qpair failed and we were unable to recover it. 00:38:02.315 [2024-05-15 20:29:54.726659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.315 [2024-05-15 20:29:54.726914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.315 [2024-05-15 20:29:54.726943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.315 qpair failed and we were unable to recover it. 00:38:02.315 [2024-05-15 20:29:54.727372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.315 [2024-05-15 20:29:54.727829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.315 [2024-05-15 20:29:54.727860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.315 qpair failed and we were unable to recover it. 00:38:02.315 [2024-05-15 20:29:54.728167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.315 [2024-05-15 20:29:54.728574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.315 [2024-05-15 20:29:54.728605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.315 qpair failed and we were unable to recover it. 00:38:02.315 [2024-05-15 20:29:54.729047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.315 [2024-05-15 20:29:54.729477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.315 [2024-05-15 20:29:54.729507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.315 qpair failed and we were unable to recover it. 00:38:02.315 [2024-05-15 20:29:54.729707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.315 [2024-05-15 20:29:54.730128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.315 [2024-05-15 20:29:54.730158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.315 qpair failed and we were unable to recover it. 00:38:02.315 [2024-05-15 20:29:54.730571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.315 [2024-05-15 20:29:54.732776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.315 [2024-05-15 20:29:54.732848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.315 qpair failed and we were unable to recover it. 
00:38:02.315 [2024-05-15 20:29:54.733333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.315 [2024-05-15 20:29:54.733573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.315 [2024-05-15 20:29:54.733604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.315 qpair failed and we were unable to recover it. 00:38:02.316 [2024-05-15 20:29:54.733977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.734382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.734413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.316 qpair failed and we were unable to recover it. 00:38:02.316 [2024-05-15 20:29:54.734850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.735275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.735306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.316 qpair failed and we were unable to recover it. 00:38:02.316 [2024-05-15 20:29:54.735750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.736187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.736217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.316 qpair failed and we were unable to recover it. 00:38:02.316 [2024-05-15 20:29:54.736540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.737012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.737042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.316 qpair failed and we were unable to recover it. 00:38:02.316 [2024-05-15 20:29:54.737355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.737790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.737820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.316 qpair failed and we were unable to recover it. 00:38:02.316 [2024-05-15 20:29:54.738255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.738656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.738688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.316 qpair failed and we were unable to recover it. 
00:38:02.316 [2024-05-15 20:29:54.739117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.739425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.739456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.316 qpair failed and we were unable to recover it. 00:38:02.316 [2024-05-15 20:29:54.739831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.740193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.740223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.316 qpair failed and we were unable to recover it. 00:38:02.316 [2024-05-15 20:29:54.740631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.741056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.741086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.316 qpair failed and we were unable to recover it. 00:38:02.316 [2024-05-15 20:29:54.741515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.741819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.741850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.316 qpair failed and we were unable to recover it. 00:38:02.316 [2024-05-15 20:29:54.742335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.742733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.742762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.316 qpair failed and we were unable to recover it. 00:38:02.316 [2024-05-15 20:29:54.743183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.743448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.743478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.316 qpair failed and we were unable to recover it. 00:38:02.316 [2024-05-15 20:29:54.743910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.744362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.744393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.316 qpair failed and we were unable to recover it. 
00:38:02.316 [2024-05-15 20:29:54.744820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.745132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.745161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.316 qpair failed and we were unable to recover it. 00:38:02.316 [2024-05-15 20:29:54.745495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.745761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.745788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.316 qpair failed and we were unable to recover it. 00:38:02.316 [2024-05-15 20:29:54.746097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.746522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.746553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.316 qpair failed and we were unable to recover it. 00:38:02.316 [2024-05-15 20:29:54.746991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.747418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.747448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.316 qpair failed and we were unable to recover it. 00:38:02.316 [2024-05-15 20:29:54.747894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.748329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.748361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.316 qpair failed and we were unable to recover it. 00:38:02.316 [2024-05-15 20:29:54.748809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.749229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.749259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.316 qpair failed and we were unable to recover it. 00:38:02.316 [2024-05-15 20:29:54.749666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.750092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.750120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.316 qpair failed and we were unable to recover it. 
00:38:02.316 [2024-05-15 20:29:54.750561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.750804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.750833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.316 qpair failed and we were unable to recover it. 00:38:02.316 [2024-05-15 20:29:54.751244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.751660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.751690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.316 qpair failed and we were unable to recover it. 00:38:02.316 [2024-05-15 20:29:54.752124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.752541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.752571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.316 qpair failed and we were unable to recover it. 00:38:02.316 [2024-05-15 20:29:54.752994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.753397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.753428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.316 qpair failed and we were unable to recover it. 00:38:02.316 [2024-05-15 20:29:54.753846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.754156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.754187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.316 qpair failed and we were unable to recover it. 00:38:02.316 [2024-05-15 20:29:54.754623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.754990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.755019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.316 qpair failed and we were unable to recover it. 00:38:02.316 [2024-05-15 20:29:54.755470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.755888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.316 [2024-05-15 20:29:54.755917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.316 qpair failed and we were unable to recover it. 
00:38:02.316 [2024-05-15 20:29:54.756345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.756767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.756795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.317 qpair failed and we were unable to recover it. 00:38:02.317 [2024-05-15 20:29:54.757258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.757723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.757754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.317 qpair failed and we were unable to recover it. 00:38:02.317 [2024-05-15 20:29:54.757994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.758455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.758484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.317 qpair failed and we were unable to recover it. 00:38:02.317 [2024-05-15 20:29:54.758921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.759347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.759378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.317 qpair failed and we were unable to recover it. 00:38:02.317 [2024-05-15 20:29:54.759673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.760089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.760117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.317 qpair failed and we were unable to recover it. 00:38:02.317 [2024-05-15 20:29:54.760517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.760941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.760972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.317 qpair failed and we were unable to recover it. 00:38:02.317 [2024-05-15 20:29:54.761392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.761681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.761715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.317 qpair failed and we were unable to recover it. 
00:38:02.317 [2024-05-15 20:29:54.762153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.762394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.762424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.317 qpair failed and we were unable to recover it. 00:38:02.317 [2024-05-15 20:29:54.762792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.763223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.763254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.317 qpair failed and we were unable to recover it. 00:38:02.317 [2024-05-15 20:29:54.763684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.764126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.764155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.317 qpair failed and we were unable to recover it. 00:38:02.317 [2024-05-15 20:29:54.764608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.765035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.765065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.317 qpair failed and we were unable to recover it. 00:38:02.317 [2024-05-15 20:29:54.765456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.765868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.765898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.317 qpair failed and we were unable to recover it. 00:38:02.317 [2024-05-15 20:29:54.766338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.766815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.766844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.317 qpair failed and we were unable to recover it. 00:38:02.317 [2024-05-15 20:29:54.767287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.767730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.767762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.317 qpair failed and we were unable to recover it. 
00:38:02.317 [2024-05-15 20:29:54.768136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.768563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.768672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.317 qpair failed and we were unable to recover it. 00:38:02.317 [2024-05-15 20:29:54.769055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.769522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.769556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.317 qpair failed and we were unable to recover it. 00:38:02.317 [2024-05-15 20:29:54.769858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.770195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.770230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.317 qpair failed and we were unable to recover it. 00:38:02.317 [2024-05-15 20:29:54.770547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.770971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.771000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.317 qpair failed and we were unable to recover it. 00:38:02.317 [2024-05-15 20:29:54.771438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.771871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.771899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.317 qpair failed and we were unable to recover it. 00:38:02.317 [2024-05-15 20:29:54.772345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.772762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.772791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.317 qpair failed and we were unable to recover it. 00:38:02.317 [2024-05-15 20:29:54.773221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.773468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.773498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.317 qpair failed and we were unable to recover it. 
00:38:02.317 [2024-05-15 20:29:54.773948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.774378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.774408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.317 qpair failed and we were unable to recover it. 00:38:02.317 [2024-05-15 20:29:54.774842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.775272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.775302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.317 qpair failed and we were unable to recover it. 00:38:02.317 [2024-05-15 20:29:54.775664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.776090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.776120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.317 qpair failed and we were unable to recover it. 00:38:02.317 [2024-05-15 20:29:54.776590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.777022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.777050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.317 qpair failed and we were unable to recover it. 00:38:02.317 [2024-05-15 20:29:54.777330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.777654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.777684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.317 qpair failed and we were unable to recover it. 00:38:02.317 [2024-05-15 20:29:54.778097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.778654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.778758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.317 qpair failed and we were unable to recover it. 00:38:02.317 [2024-05-15 20:29:54.779182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.779584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.779618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.317 qpair failed and we were unable to recover it. 
00:38:02.317 [2024-05-15 20:29:54.779929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.780234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.780263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.317 qpair failed and we were unable to recover it. 00:38:02.317 [2024-05-15 20:29:54.780668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.781082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.781110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.317 qpair failed and we were unable to recover it. 00:38:02.317 [2024-05-15 20:29:54.781544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.781956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.781984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.317 qpair failed and we were unable to recover it. 00:38:02.317 [2024-05-15 20:29:54.782353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.782821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.782850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.317 qpair failed and we were unable to recover it. 00:38:02.317 [2024-05-15 20:29:54.783278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.317 [2024-05-15 20:29:54.783718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.783750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.318 qpair failed and we were unable to recover it. 00:38:02.318 [2024-05-15 20:29:54.784192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.784648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.784678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.318 qpair failed and we were unable to recover it. 00:38:02.318 [2024-05-15 20:29:54.785100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.785329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.785360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.318 qpair failed and we were unable to recover it. 
00:38:02.318 [2024-05-15 20:29:54.785837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.786094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.786125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.318 qpair failed and we were unable to recover it. 00:38:02.318 [2024-05-15 20:29:54.786549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.786802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.786833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.318 qpair failed and we were unable to recover it. 00:38:02.318 [2024-05-15 20:29:54.787266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.787723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.787754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.318 qpair failed and we were unable to recover it. 00:38:02.318 [2024-05-15 20:29:54.788177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.788472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.788503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.318 qpair failed and we were unable to recover it. 00:38:02.318 [2024-05-15 20:29:54.788934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.789363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.789393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.318 qpair failed and we were unable to recover it. 00:38:02.318 [2024-05-15 20:29:54.789828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.790182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.790211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.318 qpair failed and we were unable to recover it. 00:38:02.318 [2024-05-15 20:29:54.790520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.790916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.790943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.318 qpair failed and we were unable to recover it. 
00:38:02.318 [2024-05-15 20:29:54.791370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.791804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.791833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.318 qpair failed and we were unable to recover it. 00:38:02.318 [2024-05-15 20:29:54.792161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.792563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.792594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.318 qpair failed and we were unable to recover it. 00:38:02.318 [2024-05-15 20:29:54.792829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.793253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.793283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.318 qpair failed and we were unable to recover it. 00:38:02.318 [2024-05-15 20:29:54.793646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.794062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.794090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.318 qpair failed and we were unable to recover it. 00:38:02.318 [2024-05-15 20:29:54.794516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.794931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.794961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.318 qpair failed and we were unable to recover it. 00:38:02.318 [2024-05-15 20:29:54.795392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.795844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.795879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.318 qpair failed and we were unable to recover it. 00:38:02.318 [2024-05-15 20:29:54.796290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.796429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.796457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.318 qpair failed and we were unable to recover it. 
00:38:02.318 [2024-05-15 20:29:54.796581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.797009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.797039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.318 qpair failed and we were unable to recover it. 00:38:02.318 [2024-05-15 20:29:54.797458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.797885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.797915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.318 qpair failed and we were unable to recover it. 00:38:02.318 [2024-05-15 20:29:54.798187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.798612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.798642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.318 qpair failed and we were unable to recover it. 00:38:02.318 [2024-05-15 20:29:54.799067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.799459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.799489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.318 qpair failed and we were unable to recover it. 00:38:02.318 [2024-05-15 20:29:54.799918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.800350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.800380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.318 qpair failed and we were unable to recover it. 00:38:02.318 [2024-05-15 20:29:54.800624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.801039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.801068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.318 qpair failed and we were unable to recover it. 00:38:02.318 [2024-05-15 20:29:54.801452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.801744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.801771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.318 qpair failed and we were unable to recover it. 
00:38:02.318 [2024-05-15 20:29:54.802211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.802614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.802645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.318 qpair failed and we were unable to recover it. 00:38:02.318 [2024-05-15 20:29:54.803109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.803384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.318 [2024-05-15 20:29:54.803420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.318 qpair failed and we were unable to recover it. 00:38:02.592 [2024-05-15 20:29:54.803870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.592 [2024-05-15 20:29:54.804290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.592 [2024-05-15 20:29:54.804341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.592 qpair failed and we were unable to recover it. 00:38:02.592 [2024-05-15 20:29:54.804711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.592 [2024-05-15 20:29:54.805032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.592 [2024-05-15 20:29:54.805069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.592 qpair failed and we were unable to recover it. 00:38:02.592 [2024-05-15 20:29:54.805460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.592 [2024-05-15 20:29:54.805906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.592 [2024-05-15 20:29:54.805934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.592 qpair failed and we were unable to recover it. 00:38:02.592 [2024-05-15 20:29:54.806388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.592 [2024-05-15 20:29:54.806816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.592 [2024-05-15 20:29:54.806843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.592 qpair failed and we were unable to recover it. 00:38:02.592 [2024-05-15 20:29:54.807271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.592 [2024-05-15 20:29:54.807705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.592 [2024-05-15 20:29:54.807735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.592 qpair failed and we were unable to recover it. 
00:38:02.592 [2024-05-15 20:29:54.808168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.592 [2024-05-15 20:29:54.808513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.592 [2024-05-15 20:29:54.808543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.592 qpair failed and we were unable to recover it. 00:38:02.592 [2024-05-15 20:29:54.808965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.592 [2024-05-15 20:29:54.809387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.592 [2024-05-15 20:29:54.809416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.592 qpair failed and we were unable to recover it. 00:38:02.592 [2024-05-15 20:29:54.809844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.592 [2024-05-15 20:29:54.810262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.592 [2024-05-15 20:29:54.810291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.592 qpair failed and we were unable to recover it. 00:38:02.592 [2024-05-15 20:29:54.810740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.592 [2024-05-15 20:29:54.811118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.592 [2024-05-15 20:29:54.811148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.592 qpair failed and we were unable to recover it. 00:38:02.592 [2024-05-15 20:29:54.811566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.592 [2024-05-15 20:29:54.811982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.592 [2024-05-15 20:29:54.812012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.592 qpair failed and we were unable to recover it. 00:38:02.592 [2024-05-15 20:29:54.812228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.592 [2024-05-15 20:29:54.812603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.592 [2024-05-15 20:29:54.812633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.592 qpair failed and we were unable to recover it. 00:38:02.593 [2024-05-15 20:29:54.813067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.813172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.813201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.593 qpair failed and we were unable to recover it. 
00:38:02.593 [2024-05-15 20:29:54.813643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.814088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.814116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.593 qpair failed and we were unable to recover it. 00:38:02.593 [2024-05-15 20:29:54.814554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.814801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.814828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.593 qpair failed and we were unable to recover it. 00:38:02.593 [2024-05-15 20:29:54.815241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.815674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.815705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.593 qpair failed and we were unable to recover it. 00:38:02.593 [2024-05-15 20:29:54.816095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.816326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.816356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.593 qpair failed and we were unable to recover it. 00:38:02.593 [2024-05-15 20:29:54.816609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.817056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.817086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.593 qpair failed and we were unable to recover it. 00:38:02.593 [2024-05-15 20:29:54.817527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.817764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.817794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.593 qpair failed and we were unable to recover it. 00:38:02.593 [2024-05-15 20:29:54.818064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.818499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.818529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.593 qpair failed and we were unable to recover it. 
00:38:02.593 [2024-05-15 20:29:54.818966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.819381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.819412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.593 qpair failed and we were unable to recover it. 00:38:02.593 [2024-05-15 20:29:54.819840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.820258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.820286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.593 qpair failed and we were unable to recover it. 00:38:02.593 [2024-05-15 20:29:54.820680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.821102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.821129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.593 qpair failed and we were unable to recover it. 00:38:02.593 [2024-05-15 20:29:54.821558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.821918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.821948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.593 qpair failed and we were unable to recover it. 00:38:02.593 [2024-05-15 20:29:54.822380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.822805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.822833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.593 qpair failed and we were unable to recover it. 00:38:02.593 [2024-05-15 20:29:54.823260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.823658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.823689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.593 qpair failed and we were unable to recover it. 00:38:02.593 [2024-05-15 20:29:54.823827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.824213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.824241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.593 qpair failed and we were unable to recover it. 
00:38:02.593 [2024-05-15 20:29:54.824656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.825111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.825142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.593 qpair failed and we were unable to recover it. 00:38:02.593 [2024-05-15 20:29:54.825406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.825687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.825717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.593 qpair failed and we were unable to recover it. 00:38:02.593 [2024-05-15 20:29:54.826149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.826563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.826594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.593 qpair failed and we were unable to recover it. 00:38:02.593 [2024-05-15 20:29:54.827017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.827333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.827366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.593 qpair failed and we were unable to recover it. 00:38:02.593 [2024-05-15 20:29:54.827630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.827867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.827897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.593 qpair failed and we were unable to recover it. 00:38:02.593 [2024-05-15 20:29:54.828345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.828778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.828809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.593 qpair failed and we were unable to recover it. 00:38:02.593 [2024-05-15 20:29:54.829242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.829638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.829669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.593 qpair failed and we were unable to recover it. 
00:38:02.593 [2024-05-15 20:29:54.830106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.593 [2024-05-15 20:29:54.830530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.830561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.594 qpair failed and we were unable to recover it. 00:38:02.594 [2024-05-15 20:29:54.830844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.831269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.831300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.594 qpair failed and we were unable to recover it. 00:38:02.594 [2024-05-15 20:29:54.831751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.832167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.832198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.594 qpair failed and we were unable to recover it. 00:38:02.594 [2024-05-15 20:29:54.832644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.833067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.833096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.594 qpair failed and we were unable to recover it. 00:38:02.594 [2024-05-15 20:29:54.833358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.833663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.833698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.594 qpair failed and we were unable to recover it. 00:38:02.594 [2024-05-15 20:29:54.833971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.834329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.834361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.594 qpair failed and we were unable to recover it. 00:38:02.594 [2024-05-15 20:29:54.834789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.835086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.835115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.594 qpair failed and we were unable to recover it. 
00:38:02.594 [2024-05-15 20:29:54.835541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.835945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.835982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.594 qpair failed and we were unable to recover it. 00:38:02.594 [2024-05-15 20:29:54.836388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.836659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.836688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.594 qpair failed and we were unable to recover it. 00:38:02.594 [2024-05-15 20:29:54.837116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.837539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.837570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.594 qpair failed and we were unable to recover it. 00:38:02.594 [2024-05-15 20:29:54.837841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.838149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.838179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.594 qpair failed and we were unable to recover it. 00:38:02.594 [2024-05-15 20:29:54.838585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.838876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.838911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.594 qpair failed and we were unable to recover it. 00:38:02.594 [2024-05-15 20:29:54.839354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.839785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.839816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.594 qpair failed and we were unable to recover it. 00:38:02.594 [2024-05-15 20:29:54.840086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.840502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.840533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.594 qpair failed and we were unable to recover it. 
00:38:02.594 [2024-05-15 20:29:54.840839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.841257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.841285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.594 qpair failed and we were unable to recover it. 00:38:02.594 [2024-05-15 20:29:54.841708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.842127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.842157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.594 qpair failed and we were unable to recover it. 00:38:02.594 [2024-05-15 20:29:54.842387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.842837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.842865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.594 qpair failed and we were unable to recover it. 00:38:02.594 [2024-05-15 20:29:54.843295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.843809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.843840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.594 qpair failed and we were unable to recover it. 00:38:02.594 [2024-05-15 20:29:54.844280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.844543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.844575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.594 qpair failed and we were unable to recover it. 00:38:02.594 [2024-05-15 20:29:54.844991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.845411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.845442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.594 qpair failed and we were unable to recover it. 00:38:02.594 [2024-05-15 20:29:54.845929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.846345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.846376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.594 qpair failed and we were unable to recover it. 
00:38:02.594 [2024-05-15 20:29:54.846802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.847255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.847283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.594 qpair failed and we were unable to recover it. 00:38:02.594 [2024-05-15 20:29:54.847719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.848119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.848148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.594 qpair failed and we were unable to recover it. 00:38:02.594 [2024-05-15 20:29:54.848562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.848986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.849015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.594 qpair failed and we were unable to recover it. 00:38:02.594 [2024-05-15 20:29:54.849422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.849729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.849759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.594 qpair failed and we were unable to recover it. 00:38:02.594 [2024-05-15 20:29:54.850195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.850612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.850642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.594 qpair failed and we were unable to recover it. 00:38:02.594 [2024-05-15 20:29:54.850919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.851345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.594 [2024-05-15 20:29:54.851377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.594 qpair failed and we were unable to recover it. 00:38:02.594 [2024-05-15 20:29:54.851864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.852298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.852339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.595 qpair failed and we were unable to recover it. 
00:38:02.595 [2024-05-15 20:29:54.852821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.853238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.853267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.595 qpair failed and we were unable to recover it. 00:38:02.595 [2024-05-15 20:29:54.853632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.854091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.854121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.595 qpair failed and we were unable to recover it. 00:38:02.595 [2024-05-15 20:29:54.854520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.854916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.854946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.595 qpair failed and we were unable to recover it. 00:38:02.595 [2024-05-15 20:29:54.855261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.855682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.855713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.595 qpair failed and we were unable to recover it. 00:38:02.595 [2024-05-15 20:29:54.856151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.856347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.856376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.595 qpair failed and we were unable to recover it. 00:38:02.595 [2024-05-15 20:29:54.856810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.857236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.857265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.595 qpair failed and we were unable to recover it. 00:38:02.595 [2024-05-15 20:29:54.857712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.858144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.858174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.595 qpair failed and we were unable to recover it. 
00:38:02.595 [2024-05-15 20:29:54.858625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.858994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.859024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.595 qpair failed and we were unable to recover it. 00:38:02.595 [2024-05-15 20:29:54.859454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.859890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.859920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.595 qpair failed and we were unable to recover it. 00:38:02.595 [2024-05-15 20:29:54.860171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.860399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.860430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.595 qpair failed and we were unable to recover it. 00:38:02.595 [2024-05-15 20:29:54.860827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.861251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.861282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.595 qpair failed and we were unable to recover it. 00:38:02.595 [2024-05-15 20:29:54.861597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.861971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.862000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.595 qpair failed and we were unable to recover it. 00:38:02.595 [2024-05-15 20:29:54.862431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.862866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.862896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.595 qpair failed and we were unable to recover it. 00:38:02.595 [2024-05-15 20:29:54.863174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.863580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.863612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.595 qpair failed and we were unable to recover it. 
00:38:02.595 [2024-05-15 20:29:54.864013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.864450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.864481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.595 qpair failed and we were unable to recover it. 00:38:02.595 [2024-05-15 20:29:54.864907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.865335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.865365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.595 qpair failed and we were unable to recover it. 00:38:02.595 [2024-05-15 20:29:54.865636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.866049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.866079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.595 qpair failed and we were unable to recover it. 00:38:02.595 [2024-05-15 20:29:54.866524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.866951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.866980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.595 qpair failed and we were unable to recover it. 00:38:02.595 [2024-05-15 20:29:54.867192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.867623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.867652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.595 qpair failed and we were unable to recover it. 00:38:02.595 [2024-05-15 20:29:54.868071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.868485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.868514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.595 qpair failed and we were unable to recover it. 00:38:02.595 [2024-05-15 20:29:54.868962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.869389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.869419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.595 qpair failed and we were unable to recover it. 
00:38:02.595 [2024-05-15 20:29:54.869848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.870272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.870302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.595 qpair failed and we were unable to recover it. 00:38:02.595 [2024-05-15 20:29:54.870817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.871276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.871306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.595 qpair failed and we were unable to recover it. 00:38:02.595 [2024-05-15 20:29:54.871682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.872136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.872164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.595 qpair failed and we were unable to recover it. 00:38:02.595 [2024-05-15 20:29:54.872351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.872778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.872807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.595 qpair failed and we were unable to recover it. 00:38:02.595 [2024-05-15 20:29:54.873088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.873327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.873358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.595 qpair failed and we were unable to recover it. 00:38:02.595 [2024-05-15 20:29:54.873626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.874088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.874118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.595 qpair failed and we were unable to recover it. 00:38:02.595 [2024-05-15 20:29:54.874363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.874783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.874812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.595 qpair failed and we were unable to recover it. 
00:38:02.595 [2024-05-15 20:29:54.875010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.875300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.595 [2024-05-15 20:29:54.875341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.595 qpair failed and we were unable to recover it. 00:38:02.596 [2024-05-15 20:29:54.875613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.596 [2024-05-15 20:29:54.876030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.596 [2024-05-15 20:29:54.876059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.596 qpair failed and we were unable to recover it. 00:38:02.596 [2024-05-15 20:29:54.876357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.596 [2024-05-15 20:29:54.876614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.596 [2024-05-15 20:29:54.876650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.596 qpair failed and we were unable to recover it. 00:38:02.596 [2024-05-15 20:29:54.876968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.596 [2024-05-15 20:29:54.877425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.596 [2024-05-15 20:29:54.877454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.596 qpair failed and we were unable to recover it. 00:38:02.596 [2024-05-15 20:29:54.877898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.596 [2024-05-15 20:29:54.878340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.596 [2024-05-15 20:29:54.878370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.596 qpair failed and we were unable to recover it. 00:38:02.596 [2024-05-15 20:29:54.878615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.596 [2024-05-15 20:29:54.878988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.596 [2024-05-15 20:29:54.879016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.596 qpair failed and we were unable to recover it. 00:38:02.596 [2024-05-15 20:29:54.879453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.596 [2024-05-15 20:29:54.879897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.596 [2024-05-15 20:29:54.879925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.596 qpair failed and we were unable to recover it. 
00:38:02.596 [2024-05-15 20:29:54.880353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.596 [2024-05-15 20:29:54.880833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.596 [2024-05-15 20:29:54.880864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420
00:38:02.596 qpair failed and we were unable to recover it.
[The same four-line failure cycle (two posix.c:1037:posix_sock_create "connect() failed, errno = 111" errors, one nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats continuously between this excerpt and the one below, covering timestamps 20:29:54.880 through 20:29:55.004 (console time 00:38:02.596 to 00:38:02.601); a minimal sketch of the errno 111 condition follows below.]
00:38:02.601 [2024-05-15 20:29:55.004570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.601 [2024-05-15 20:29:55.004794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.601 [2024-05-15 20:29:55.004821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420
00:38:02.601 qpair failed and we were unable to recover it.
00:38:02.601 [2024-05-15 20:29:55.005253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.601 [2024-05-15 20:29:55.005473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.601 [2024-05-15 20:29:55.005503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.601 qpair failed and we were unable to recover it. 00:38:02.601 [2024-05-15 20:29:55.005976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.601 [2024-05-15 20:29:55.006198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.601 [2024-05-15 20:29:55.006225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.601 qpair failed and we were unable to recover it. 00:38:02.601 [2024-05-15 20:29:55.006643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.601 [2024-05-15 20:29:55.007070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.601 [2024-05-15 20:29:55.007099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.601 qpair failed and we were unable to recover it. 00:38:02.601 [2024-05-15 20:29:55.007344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.601 [2024-05-15 20:29:55.007809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.601 [2024-05-15 20:29:55.007838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.601 qpair failed and we were unable to recover it. 00:38:02.601 [2024-05-15 20:29:55.008269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.601 [2024-05-15 20:29:55.008664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.601 [2024-05-15 20:29:55.008696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.601 qpair failed and we were unable to recover it. 00:38:02.601 [2024-05-15 20:29:55.009133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.601 [2024-05-15 20:29:55.009357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.601 [2024-05-15 20:29:55.009386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.601 qpair failed and we were unable to recover it. 00:38:02.601 [2024-05-15 20:29:55.009808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.601 [2024-05-15 20:29:55.010232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.601 [2024-05-15 20:29:55.010261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.601 qpair failed and we were unable to recover it. 
00:38:02.601 [2024-05-15 20:29:55.010692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.601 [2024-05-15 20:29:55.010959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.601 [2024-05-15 20:29:55.010988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.601 qpair failed and we were unable to recover it. 00:38:02.602 [2024-05-15 20:29:55.011416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.011872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.011903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.602 qpair failed and we were unable to recover it. 00:38:02.602 [2024-05-15 20:29:55.012343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.012827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.012856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.602 qpair failed and we were unable to recover it. 00:38:02.602 [2024-05-15 20:29:55.013332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.013759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.013788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.602 qpair failed and we were unable to recover it. 00:38:02.602 [2024-05-15 20:29:55.014217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.014642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.014673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.602 qpair failed and we were unable to recover it. 00:38:02.602 [2024-05-15 20:29:55.015052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.015448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.015479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.602 qpair failed and we were unable to recover it. 00:38:02.602 [2024-05-15 20:29:55.015909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.016353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.016384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.602 qpair failed and we were unable to recover it. 
00:38:02.602 [2024-05-15 20:29:55.016619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.016885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.016914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.602 qpair failed and we were unable to recover it. 00:38:02.602 [2024-05-15 20:29:55.017154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.017449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.017479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.602 qpair failed and we were unable to recover it. 00:38:02.602 [2024-05-15 20:29:55.017734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.018138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.018167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.602 qpair failed and we were unable to recover it. 00:38:02.602 [2024-05-15 20:29:55.018617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.019046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.019074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.602 qpair failed and we were unable to recover it. 00:38:02.602 [2024-05-15 20:29:55.019508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.019735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.019762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.602 qpair failed and we were unable to recover it. 00:38:02.602 [2024-05-15 20:29:55.020173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.020411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.020441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.602 qpair failed and we were unable to recover it. 00:38:02.602 [2024-05-15 20:29:55.020880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.021303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.021344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.602 qpair failed and we were unable to recover it. 
00:38:02.602 [2024-05-15 20:29:55.021821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.022248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.022276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.602 qpair failed and we were unable to recover it. 00:38:02.602 [2024-05-15 20:29:55.022689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.023121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.023150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.602 qpair failed and we were unable to recover it. 00:38:02.602 [2024-05-15 20:29:55.023276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.023626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.023656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.602 qpair failed and we were unable to recover it. 00:38:02.602 [2024-05-15 20:29:55.023882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.024174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.024204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.602 qpair failed and we were unable to recover it. 00:38:02.602 [2024-05-15 20:29:55.024432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.024727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.024765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.602 qpair failed and we were unable to recover it. 00:38:02.602 [2024-05-15 20:29:55.025225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.025660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.025692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.602 qpair failed and we were unable to recover it. 00:38:02.602 [2024-05-15 20:29:55.026142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.026563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.026593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.602 qpair failed and we were unable to recover it. 
00:38:02.602 [2024-05-15 20:29:55.027025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.027439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.027469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.602 qpair failed and we were unable to recover it. 00:38:02.602 [2024-05-15 20:29:55.027923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.028346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.028377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.602 qpair failed and we were unable to recover it. 00:38:02.602 [2024-05-15 20:29:55.028610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.029055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.029084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.602 qpair failed and we were unable to recover it. 00:38:02.602 [2024-05-15 20:29:55.029270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.029500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.029530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.602 qpair failed and we were unable to recover it. 00:38:02.602 [2024-05-15 20:29:55.029797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.030191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.030221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.602 qpair failed and we were unable to recover it. 00:38:02.602 [2024-05-15 20:29:55.030605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.031029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.031059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.602 qpair failed and we were unable to recover it. 00:38:02.602 [2024-05-15 20:29:55.031343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.031656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.031686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.602 qpair failed and we were unable to recover it. 
00:38:02.602 [2024-05-15 20:29:55.032132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.032563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.032603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.602 qpair failed and we were unable to recover it. 00:38:02.602 [2024-05-15 20:29:55.032833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.033260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.602 [2024-05-15 20:29:55.033289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.602 qpair failed and we were unable to recover it. 00:38:02.602 [2024-05-15 20:29:55.033750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.034165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.034194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.603 qpair failed and we were unable to recover it. 00:38:02.603 [2024-05-15 20:29:55.034463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.034937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.034966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.603 qpair failed and we were unable to recover it. 00:38:02.603 [2024-05-15 20:29:55.035195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.035611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.035641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.603 qpair failed and we were unable to recover it. 00:38:02.603 [2024-05-15 20:29:55.035998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.036459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.036488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.603 qpair failed and we were unable to recover it. 00:38:02.603 [2024-05-15 20:29:55.036884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.037177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.037206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.603 qpair failed and we were unable to recover it. 
00:38:02.603 [2024-05-15 20:29:55.037645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.038071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.038100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.603 qpair failed and we were unable to recover it. 00:38:02.603 [2024-05-15 20:29:55.038539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.038963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.038992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.603 qpair failed and we were unable to recover it. 00:38:02.603 [2024-05-15 20:29:55.039235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.039547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.039577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.603 qpair failed and we were unable to recover it. 00:38:02.603 [2024-05-15 20:29:55.040024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.040447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.040477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.603 qpair failed and we were unable to recover it. 00:38:02.603 [2024-05-15 20:29:55.040916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.041355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.041385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.603 qpair failed and we were unable to recover it. 00:38:02.603 [2024-05-15 20:29:55.041697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.042146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.042176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.603 qpair failed and we were unable to recover it. 00:38:02.603 [2024-05-15 20:29:55.042453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.042845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.042873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.603 qpair failed and we were unable to recover it. 
00:38:02.603 [2024-05-15 20:29:55.043324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.043764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.043794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.603 qpair failed and we were unable to recover it. 00:38:02.603 [2024-05-15 20:29:55.044225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.044651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.044683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.603 qpair failed and we were unable to recover it. 00:38:02.603 [2024-05-15 20:29:55.044998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.045430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.045460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.603 qpair failed and we were unable to recover it. 00:38:02.603 [2024-05-15 20:29:55.045743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.046176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.046205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.603 qpair failed and we were unable to recover it. 00:38:02.603 [2024-05-15 20:29:55.046624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.047048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.047076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.603 qpair failed and we were unable to recover it. 00:38:02.603 [2024-05-15 20:29:55.047383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.047823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.047851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.603 qpair failed and we were unable to recover it. 00:38:02.603 [2024-05-15 20:29:55.048298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.048765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.048794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.603 qpair failed and we were unable to recover it. 
00:38:02.603 [2024-05-15 20:29:55.049223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.049649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.049680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.603 qpair failed and we were unable to recover it. 00:38:02.603 [2024-05-15 20:29:55.050109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.050356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.050387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.603 qpair failed and we were unable to recover it. 00:38:02.603 [2024-05-15 20:29:55.050817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.051125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.051155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.603 qpair failed and we were unable to recover it. 00:38:02.603 [2024-05-15 20:29:55.051603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.051910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.051939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.603 qpair failed and we were unable to recover it. 00:38:02.603 [2024-05-15 20:29:55.052365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.052805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.052834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.603 qpair failed and we were unable to recover it. 00:38:02.603 [2024-05-15 20:29:55.053086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.053508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.053539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.603 qpair failed and we were unable to recover it. 00:38:02.603 [2024-05-15 20:29:55.053969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.054266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.054296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.603 qpair failed and we were unable to recover it. 
00:38:02.603 [2024-05-15 20:29:55.054612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.055044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.055074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.603 qpair failed and we were unable to recover it. 00:38:02.603 [2024-05-15 20:29:55.055503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.055871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.055902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.603 qpair failed and we were unable to recover it. 00:38:02.603 [2024-05-15 20:29:55.056335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.056650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.056682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.603 qpair failed and we were unable to recover it. 00:38:02.603 [2024-05-15 20:29:55.057107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.057540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.603 [2024-05-15 20:29:55.057571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.603 qpair failed and we were unable to recover it. 00:38:02.604 [2024-05-15 20:29:55.058041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.058281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.058308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.604 qpair failed and we were unable to recover it. 00:38:02.604 [2024-05-15 20:29:55.058692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.059121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.059149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.604 qpair failed and we were unable to recover it. 00:38:02.604 [2024-05-15 20:29:55.059599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.060024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.060053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.604 qpair failed and we were unable to recover it. 
00:38:02.604 [2024-05-15 20:29:55.060650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.061227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.061268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.604 qpair failed and we were unable to recover it. 00:38:02.604 [2024-05-15 20:29:55.061730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.062200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.062230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.604 qpair failed and we were unable to recover it. 00:38:02.604 [2024-05-15 20:29:55.062738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.063164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.063194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.604 qpair failed and we were unable to recover it. 00:38:02.604 [2024-05-15 20:29:55.063571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.063999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.064027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.604 qpair failed and we were unable to recover it. 00:38:02.604 [2024-05-15 20:29:55.064461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.064789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.064825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.604 qpair failed and we were unable to recover it. 00:38:02.604 [2024-05-15 20:29:55.065217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.065486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.065519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.604 qpair failed and we were unable to recover it. 00:38:02.604 [2024-05-15 20:29:55.065754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.066179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.066221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.604 qpair failed and we were unable to recover it. 
00:38:02.604 [2024-05-15 20:29:55.066624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.067045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.067075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.604 qpair failed and we were unable to recover it. 00:38:02.604 [2024-05-15 20:29:55.067502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.067928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.067958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.604 qpair failed and we were unable to recover it. 00:38:02.604 [2024-05-15 20:29:55.068381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.068819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.068849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.604 qpair failed and we were unable to recover it. 00:38:02.604 [2024-05-15 20:29:55.069231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.069686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.069716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.604 qpair failed and we were unable to recover it. 00:38:02.604 [2024-05-15 20:29:55.070160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.070565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.070594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.604 qpair failed and we were unable to recover it. 00:38:02.604 [2024-05-15 20:29:55.071034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.071461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.071492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.604 qpair failed and we were unable to recover it. 00:38:02.604 [2024-05-15 20:29:55.071963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.072387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.072418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.604 qpair failed and we were unable to recover it. 
00:38:02.604 [2024-05-15 20:29:55.072646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.072938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.072969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.604 qpair failed and we were unable to recover it. 00:38:02.604 [2024-05-15 20:29:55.073420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.073662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.073689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.604 qpair failed and we were unable to recover it. 00:38:02.604 [2024-05-15 20:29:55.073917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.074351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.074381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.604 qpair failed and we were unable to recover it. 00:38:02.604 [2024-05-15 20:29:55.074795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.075222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.075252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.604 qpair failed and we were unable to recover it. 00:38:02.604 [2024-05-15 20:29:55.075579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.076039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.076068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.604 qpair failed and we were unable to recover it. 00:38:02.604 [2024-05-15 20:29:55.076492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.076911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.076940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.604 qpair failed and we were unable to recover it. 00:38:02.604 [2024-05-15 20:29:55.077372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.077812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.077841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.604 qpair failed and we were unable to recover it. 
00:38:02.604 [2024-05-15 20:29:55.078248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.078662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.078692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.604 qpair failed and we were unable to recover it. 00:38:02.604 [2024-05-15 20:29:55.078940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.079360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.079389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.604 qpair failed and we were unable to recover it. 00:38:02.604 [2024-05-15 20:29:55.079639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.079927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.079957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.604 qpair failed and we were unable to recover it. 00:38:02.604 [2024-05-15 20:29:55.080221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.080642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.604 [2024-05-15 20:29:55.080673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.604 qpair failed and we were unable to recover it. 00:38:02.604 [2024-05-15 20:29:55.081110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.081540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.081573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.874 qpair failed and we were unable to recover it. 00:38:02.874 [2024-05-15 20:29:55.081874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.082342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.082374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.874 qpair failed and we were unable to recover it. 00:38:02.874 [2024-05-15 20:29:55.082849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.083156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.083186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.874 qpair failed and we were unable to recover it. 
00:38:02.874 [2024-05-15 20:29:55.083622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.083931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.083962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.874 qpair failed and we were unable to recover it. 00:38:02.874 [2024-05-15 20:29:55.084235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.084649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.084679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.874 qpair failed and we were unable to recover it. 00:38:02.874 [2024-05-15 20:29:55.085064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.085480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.085510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.874 qpair failed and we were unable to recover it. 00:38:02.874 [2024-05-15 20:29:55.085762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.086180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.086208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.874 qpair failed and we were unable to recover it. 00:38:02.874 [2024-05-15 20:29:55.086644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.086866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.086893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.874 qpair failed and we were unable to recover it. 00:38:02.874 [2024-05-15 20:29:55.087200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.087625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.087655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.874 qpair failed and we were unable to recover it. 00:38:02.874 [2024-05-15 20:29:55.087909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.088174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.088202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.874 qpair failed and we were unable to recover it. 
00:38:02.874 [2024-05-15 20:29:55.088676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.088930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.088962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.874 qpair failed and we were unable to recover it. 00:38:02.874 [2024-05-15 20:29:55.089280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.089731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.089761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.874 qpair failed and we were unable to recover it. 00:38:02.874 [2024-05-15 20:29:55.090206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.090619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.090650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.874 qpair failed and we were unable to recover it. 00:38:02.874 [2024-05-15 20:29:55.091010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.091299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.091339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.874 qpair failed and we were unable to recover it. 00:38:02.874 [2024-05-15 20:29:55.091760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.092142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.092172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.874 qpair failed and we were unable to recover it. 00:38:02.874 [2024-05-15 20:29:55.092577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.093008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.093036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.874 qpair failed and we were unable to recover it. 00:38:02.874 [2024-05-15 20:29:55.093482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.093708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.093735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.874 qpair failed and we were unable to recover it. 
00:38:02.874 [2024-05-15 20:29:55.094152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.094456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.094486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.874 qpair failed and we were unable to recover it. 00:38:02.874 [2024-05-15 20:29:55.094833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.095054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.095082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.874 qpair failed and we were unable to recover it. 00:38:02.874 [2024-05-15 20:29:55.095499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.095928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.095958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.874 qpair failed and we were unable to recover it. 00:38:02.874 [2024-05-15 20:29:55.096286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.096735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.096766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.874 qpair failed and we were unable to recover it. 00:38:02.874 [2024-05-15 20:29:55.097196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.097472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.097503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.874 qpair failed and we were unable to recover it. 00:38:02.874 [2024-05-15 20:29:55.097873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.098296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.098341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.874 qpair failed and we were unable to recover it. 00:38:02.874 [2024-05-15 20:29:55.098787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.099215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.099244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.874 qpair failed and we were unable to recover it. 
00:38:02.874 [2024-05-15 20:29:55.099651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.100072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.100102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.874 qpair failed and we were unable to recover it. 00:38:02.874 [2024-05-15 20:29:55.100527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.100639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.100667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.874 qpair failed and we were unable to recover it. 00:38:02.874 [2024-05-15 20:29:55.101057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.101485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.874 [2024-05-15 20:29:55.101516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.874 qpair failed and we were unable to recover it. 00:38:02.875 [2024-05-15 20:29:55.101797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.102262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.102290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.875 qpair failed and we were unable to recover it. 00:38:02.875 [2024-05-15 20:29:55.102716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.103143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.103172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.875 qpair failed and we were unable to recover it. 00:38:02.875 [2024-05-15 20:29:55.103621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.104063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.104093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.875 qpair failed and we were unable to recover it. 00:38:02.875 [2024-05-15 20:29:55.104505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.104730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.104758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.875 qpair failed and we were unable to recover it. 
00:38:02.875 [2024-05-15 20:29:55.105179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.105650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.105680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.875 qpair failed and we were unable to recover it. 00:38:02.875 [2024-05-15 20:29:55.106120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.106350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.106386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.875 qpair failed and we were unable to recover it. 00:38:02.875 [2024-05-15 20:29:55.106702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.107071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.107100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.875 qpair failed and we were unable to recover it. 00:38:02.875 [2024-05-15 20:29:55.107542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.107970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.107999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.875 qpair failed and we were unable to recover it. 00:38:02.875 [2024-05-15 20:29:55.108430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.108663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.108690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.875 qpair failed and we were unable to recover it. 00:38:02.875 [2024-05-15 20:29:55.109009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.109437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.109467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.875 qpair failed and we were unable to recover it. 00:38:02.875 [2024-05-15 20:29:55.109762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.110209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.110239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.875 qpair failed and we were unable to recover it. 
00:38:02.875 [2024-05-15 20:29:55.110693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.111120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.111149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.875 qpair failed and we were unable to recover it. 00:38:02.875 [2024-05-15 20:29:55.111561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.111997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.112026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.875 qpair failed and we were unable to recover it. 00:38:02.875 [2024-05-15 20:29:55.112286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.112696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.112725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.875 qpair failed and we were unable to recover it. 00:38:02.875 [2024-05-15 20:29:55.113160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.113565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.113595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.875 qpair failed and we were unable to recover it. 00:38:02.875 [2024-05-15 20:29:55.114032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.114417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.114446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.875 qpair failed and we were unable to recover it. 00:38:02.875 [2024-05-15 20:29:55.114769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.115176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.115205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.875 qpair failed and we were unable to recover it. 00:38:02.875 [2024-05-15 20:29:55.115484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.115947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.115976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.875 qpair failed and we were unable to recover it. 
00:38:02.875 [2024-05-15 20:29:55.116253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.116664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.116695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.875 qpair failed and we were unable to recover it. 00:38:02.875 [2024-05-15 20:29:55.117139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.117392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.117421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.875 qpair failed and we were unable to recover it. 00:38:02.875 [2024-05-15 20:29:55.117716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.118167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.118197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.875 qpair failed and we were unable to recover it. 00:38:02.875 [2024-05-15 20:29:55.118451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.118880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.118910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.875 qpair failed and we were unable to recover it. 00:38:02.875 [2024-05-15 20:29:55.119198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.119459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.119489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.875 qpair failed and we were unable to recover it. 00:38:02.875 [2024-05-15 20:29:55.119886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.120344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.120376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.875 qpair failed and we were unable to recover it. 00:38:02.875 [2024-05-15 20:29:55.120847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.121273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.121303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.875 qpair failed and we were unable to recover it. 
00:38:02.875 [2024-05-15 20:29:55.121587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.121886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.121916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.875 qpair failed and we were unable to recover it. 00:38:02.875 [2024-05-15 20:29:55.122340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.122772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.122802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.875 qpair failed and we were unable to recover it. 00:38:02.875 [2024-05-15 20:29:55.123235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.123663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.123694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.875 qpair failed and we were unable to recover it. 00:38:02.875 [2024-05-15 20:29:55.124125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.124555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.124585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.875 qpair failed and we were unable to recover it. 00:38:02.875 [2024-05-15 20:29:55.125025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.125446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.125477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.875 qpair failed and we were unable to recover it. 00:38:02.875 [2024-05-15 20:29:55.125851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.126106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.126136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.875 qpair failed and we were unable to recover it. 00:38:02.875 [2024-05-15 20:29:55.126579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.127010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.875 [2024-05-15 20:29:55.127040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.875 qpair failed and we were unable to recover it. 
00:38:02.875 [2024-05-15 20:29:55.127456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.127883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.127914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.876 qpair failed and we were unable to recover it. 00:38:02.876 [2024-05-15 20:29:55.128296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.128737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.128769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.876 qpair failed and we were unable to recover it. 00:38:02.876 [2024-05-15 20:29:55.129266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.129735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.129767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.876 qpair failed and we were unable to recover it. 00:38:02.876 [2024-05-15 20:29:55.130082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.130529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.130559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.876 qpair failed and we were unable to recover it. 00:38:02.876 [2024-05-15 20:29:55.130836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.130959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.130987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.876 qpair failed and we were unable to recover it. 00:38:02.876 [2024-05-15 20:29:55.131445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.131828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.131856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.876 qpair failed and we were unable to recover it. 00:38:02.876 [2024-05-15 20:29:55.132280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.132559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.132590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.876 qpair failed and we were unable to recover it. 
00:38:02.876 [2024-05-15 20:29:55.133035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.133461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.133492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.876 qpair failed and we were unable to recover it. 00:38:02.876 [2024-05-15 20:29:55.133922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.134360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.134391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.876 qpair failed and we were unable to recover it. 00:38:02.876 [2024-05-15 20:29:55.134845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.134959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.134985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.876 qpair failed and we were unable to recover it. 00:38:02.876 [2024-05-15 20:29:55.135421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.135673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.135700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.876 qpair failed and we were unable to recover it. 00:38:02.876 [2024-05-15 20:29:55.136142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.136568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.136598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.876 qpair failed and we were unable to recover it. 00:38:02.876 [2024-05-15 20:29:55.137010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.137249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.137278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.876 qpair failed and we were unable to recover it. 00:38:02.876 [2024-05-15 20:29:55.137728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.138030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.138060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.876 qpair failed and we were unable to recover it. 
00:38:02.876 [2024-05-15 20:29:55.138380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.138803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.138838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.876 qpair failed and we were unable to recover it. 00:38:02.876 [2024-05-15 20:29:55.139332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.139651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.139680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.876 qpair failed and we were unable to recover it. 00:38:02.876 [2024-05-15 20:29:55.140111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.140352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.140382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.876 qpair failed and we were unable to recover it. 00:38:02.876 [2024-05-15 20:29:55.140690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.140938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.140965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.876 qpair failed and we were unable to recover it. 00:38:02.876 [2024-05-15 20:29:55.141389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.141641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.141668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.876 qpair failed and we were unable to recover it. 00:38:02.876 [2024-05-15 20:29:55.141962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.142268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.142297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.876 qpair failed and we were unable to recover it. 00:38:02.876 [2024-05-15 20:29:55.142725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.142961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.142988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.876 qpair failed and we were unable to recover it. 
00:38:02.876 [2024-05-15 20:29:55.143298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.143764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.143793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.876 qpair failed and we were unable to recover it. 00:38:02.876 [2024-05-15 20:29:55.144049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.144462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.144492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.876 qpair failed and we were unable to recover it. 00:38:02.876 [2024-05-15 20:29:55.144946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.145373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.145403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.876 qpair failed and we were unable to recover it. 00:38:02.876 [2024-05-15 20:29:55.145652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.146013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.146049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.876 qpair failed and we were unable to recover it. 00:38:02.876 [2024-05-15 20:29:55.146485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.146795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.146824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.876 qpair failed and we were unable to recover it. 00:38:02.876 [2024-05-15 20:29:55.147227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.147656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.147687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.876 qpair failed and we were unable to recover it. 00:38:02.876 [2024-05-15 20:29:55.148002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.148295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.148343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.876 qpair failed and we were unable to recover it. 
00:38:02.876 [2024-05-15 20:29:55.148794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.149174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.149205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.876 qpair failed and we were unable to recover it. 00:38:02.876 [2024-05-15 20:29:55.149621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.149859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.149888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.876 qpair failed and we were unable to recover it. 00:38:02.876 [2024-05-15 20:29:55.150307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.150759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.150790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.876 qpair failed and we were unable to recover it. 00:38:02.876 [2024-05-15 20:29:55.151223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.151656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.876 [2024-05-15 20:29:55.151687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.876 qpair failed and we were unable to recover it. 00:38:02.877 [2024-05-15 20:29:55.152101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.152542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.152572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.877 qpair failed and we were unable to recover it. 00:38:02.877 [2024-05-15 20:29:55.153004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.153428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.153458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.877 qpair failed and we were unable to recover it. 00:38:02.877 [2024-05-15 20:29:55.153901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.154353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.154385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.877 qpair failed and we were unable to recover it. 
00:38:02.877 [2024-05-15 20:29:55.154817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.155242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.155271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.877 qpair failed and we were unable to recover it. 00:38:02.877 [2024-05-15 20:29:55.155678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.155983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.156012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.877 qpair failed and we were unable to recover it. 00:38:02.877 [2024-05-15 20:29:55.156143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.156555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.156585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.877 qpair failed and we were unable to recover it. 00:38:02.877 [2024-05-15 20:29:55.157061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.157462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.157493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.877 qpair failed and we were unable to recover it. 00:38:02.877 [2024-05-15 20:29:55.157957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.158185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.158213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.877 qpair failed and we were unable to recover it. 00:38:02.877 [2024-05-15 20:29:55.158521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.158947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.158977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.877 qpair failed and we were unable to recover it. 00:38:02.877 [2024-05-15 20:29:55.159295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.159759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.159790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.877 qpair failed and we were unable to recover it. 
00:38:02.877 [2024-05-15 20:29:55.160206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.160650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.160680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.877 qpair failed and we were unable to recover it. 00:38:02.877 [2024-05-15 20:29:55.160955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.161253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.161282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.877 qpair failed and we were unable to recover it. 00:38:02.877 [2024-05-15 20:29:55.161713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.162106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.162135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.877 qpair failed and we were unable to recover it. 00:38:02.877 [2024-05-15 20:29:55.162574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.163002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.163032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.877 qpair failed and we were unable to recover it. 00:38:02.877 [2024-05-15 20:29:55.163393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.163852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.163881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.877 qpair failed and we were unable to recover it. 00:38:02.877 [2024-05-15 20:29:55.164333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.164793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.164822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.877 qpair failed and we were unable to recover it. 00:38:02.877 [2024-05-15 20:29:55.165247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.165672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.165702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.877 qpair failed and we were unable to recover it. 
00:38:02.877 [2024-05-15 20:29:55.165954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.166439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.166470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.877 qpair failed and we were unable to recover it. 00:38:02.877 [2024-05-15 20:29:55.166902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.167307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.167348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.877 qpair failed and we were unable to recover it. 00:38:02.877 [2024-05-15 20:29:55.167583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.167997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.168026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.877 qpair failed and we were unable to recover it. 00:38:02.877 [2024-05-15 20:29:55.168456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.168885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.168914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.877 qpair failed and we were unable to recover it. 00:38:02.877 [2024-05-15 20:29:55.169368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.169644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.169671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.877 qpair failed and we were unable to recover it. 00:38:02.877 [2024-05-15 20:29:55.170098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.170335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.170365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.877 qpair failed and we were unable to recover it. 00:38:02.877 [2024-05-15 20:29:55.170644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.171105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.171133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.877 qpair failed and we were unable to recover it. 
00:38:02.877 [2024-05-15 20:29:55.171563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.171970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.171999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.877 qpair failed and we were unable to recover it. 00:38:02.877 [2024-05-15 20:29:55.172253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.172679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.172709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.877 qpair failed and we were unable to recover it. 00:38:02.877 [2024-05-15 20:29:55.173132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.173556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.173584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.877 qpair failed and we were unable to recover it. 00:38:02.877 [2024-05-15 20:29:55.174015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.174460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.174490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.877 qpair failed and we were unable to recover it. 00:38:02.877 [2024-05-15 20:29:55.174898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.175333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.175364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.877 qpair failed and we were unable to recover it. 00:38:02.877 [2024-05-15 20:29:55.175770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.176238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.176268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.877 qpair failed and we were unable to recover it. 00:38:02.877 [2024-05-15 20:29:55.176702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.177125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.177155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.877 qpair failed and we were unable to recover it. 
00:38:02.877 [2024-05-15 20:29:55.177565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.177996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.178026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.877 qpair failed and we were unable to recover it. 00:38:02.877 [2024-05-15 20:29:55.178458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.877 [2024-05-15 20:29:55.178887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.878 [2024-05-15 20:29:55.178915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.878 qpair failed and we were unable to recover it. 00:38:02.878 [2024-05-15 20:29:55.179330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.878 [2024-05-15 20:29:55.179698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.878 [2024-05-15 20:29:55.179735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.878 qpair failed and we were unable to recover it. 00:38:02.878 [2024-05-15 20:29:55.180149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.878 [2024-05-15 20:29:55.180664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.878 [2024-05-15 20:29:55.180770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.878 qpair failed and we were unable to recover it. 00:38:02.878 [2024-05-15 20:29:55.181114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.878 [2024-05-15 20:29:55.181550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.878 [2024-05-15 20:29:55.181583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.878 qpair failed and we were unable to recover it. 00:38:02.878 [2024-05-15 20:29:55.182006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.878 [2024-05-15 20:29:55.182219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.878 [2024-05-15 20:29:55.182250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.878 qpair failed and we were unable to recover it. 00:38:02.878 [2024-05-15 20:29:55.182685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.878 [2024-05-15 20:29:55.183111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.878 [2024-05-15 20:29:55.183141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.878 qpair failed and we were unable to recover it. 
00:38:02.878 [2024-05-15 20:29:55.183657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.878 [2024-05-15 20:29:55.183908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.878 [2024-05-15 20:29:55.183937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420
00:38:02.878 qpair failed and we were unable to recover it.
[the same record group repeats without variation: two posix.c:1037:posix_sock_create "connect() failed, errno = 111" errors, then an nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420", each attempt ending with "qpair failed and we were unable to recover it."; the repetitions run from [2024-05-15 20:29:55.183657] through [2024-05-15 20:29:55.307018] (log timestamps 00:38:02.878 to 00:38:02.883)]
00:38:02.883 [2024-05-15 20:29:55.307464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.307921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.307948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.883 qpair failed and we were unable to recover it. 00:38:02.883 [2024-05-15 20:29:55.308184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.308514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.308542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.883 qpair failed and we were unable to recover it. 00:38:02.883 [2024-05-15 20:29:55.308958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.309182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.309209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.883 qpair failed and we were unable to recover it. 00:38:02.883 [2024-05-15 20:29:55.309445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.309839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.309866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.883 qpair failed and we were unable to recover it. 00:38:02.883 [2024-05-15 20:29:55.310307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.310597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.310625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.883 qpair failed and we were unable to recover it. 00:38:02.883 [2024-05-15 20:29:55.311062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.311472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.311500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.883 qpair failed and we were unable to recover it. 00:38:02.883 [2024-05-15 20:29:55.311948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.312052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.312078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.883 qpair failed and we were unable to recover it. 
00:38:02.883 [2024-05-15 20:29:55.312496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.312934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.312960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.883 qpair failed and we were unable to recover it. 00:38:02.883 [2024-05-15 20:29:55.313195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.313616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.313644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.883 qpair failed and we were unable to recover it. 00:38:02.883 [2024-05-15 20:29:55.314092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.314534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.314562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.883 qpair failed and we were unable to recover it. 00:38:02.883 [2024-05-15 20:29:55.315001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.315274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.315301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.883 qpair failed and we were unable to recover it. 00:38:02.883 [2024-05-15 20:29:55.315753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.316075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.316101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.883 qpair failed and we were unable to recover it. 00:38:02.883 [2024-05-15 20:29:55.316491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.316938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.316965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.883 qpair failed and we were unable to recover it. 00:38:02.883 [2024-05-15 20:29:55.317247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.317672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.317700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.883 qpair failed and we were unable to recover it. 
00:38:02.883 [2024-05-15 20:29:55.318154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.318453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.318481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.883 qpair failed and we were unable to recover it. 00:38:02.883 [2024-05-15 20:29:55.318965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.319273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.319299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.883 qpair failed and we were unable to recover it. 00:38:02.883 [2024-05-15 20:29:55.319495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.319938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.319965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.883 qpair failed and we were unable to recover it. 00:38:02.883 [2024-05-15 20:29:55.320289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.320575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.320603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.883 qpair failed and we were unable to recover it. 00:38:02.883 [2024-05-15 20:29:55.321046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.321354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.321382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.883 qpair failed and we were unable to recover it. 00:38:02.883 [2024-05-15 20:29:55.321795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.322240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.322266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.883 qpair failed and we were unable to recover it. 00:38:02.883 [2024-05-15 20:29:55.322700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.323142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.323170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.883 qpair failed and we were unable to recover it. 
00:38:02.883 [2024-05-15 20:29:55.323555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.323883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.323910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.883 qpair failed and we were unable to recover it. 00:38:02.883 [2024-05-15 20:29:55.324354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.324781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.324808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.883 qpair failed and we were unable to recover it. 00:38:02.883 [2024-05-15 20:29:55.325232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.325713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.325742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.883 qpair failed and we were unable to recover it. 00:38:02.883 [2024-05-15 20:29:55.326208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.326436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.883 [2024-05-15 20:29:55.326463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.883 qpair failed and we were unable to recover it. 00:38:02.883 [2024-05-15 20:29:55.326750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.327074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.327105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.884 qpair failed and we were unable to recover it. 00:38:02.884 [2024-05-15 20:29:55.327524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.327934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.327961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.884 qpair failed and we were unable to recover it. 00:38:02.884 [2024-05-15 20:29:55.328390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.328799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.328827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.884 qpair failed and we were unable to recover it. 
00:38:02.884 [2024-05-15 20:29:55.329277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.329730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.329759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.884 qpair failed and we were unable to recover it. 00:38:02.884 [2024-05-15 20:29:55.330153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.330391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.330419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.884 qpair failed and we were unable to recover it. 00:38:02.884 [2024-05-15 20:29:55.330743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.331133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.331160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.884 qpair failed and we were unable to recover it. 00:38:02.884 [2024-05-15 20:29:55.331450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.331744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.331771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.884 qpair failed and we were unable to recover it. 00:38:02.884 [2024-05-15 20:29:55.332035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.332295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.332339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.884 qpair failed and we were unable to recover it. 00:38:02.884 [2024-05-15 20:29:55.332633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.333072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.333100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.884 qpair failed and we were unable to recover it. 00:38:02.884 [2024-05-15 20:29:55.333496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.333970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.333996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.884 qpair failed and we were unable to recover it. 
00:38:02.884 [2024-05-15 20:29:55.334251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.334600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.334636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.884 qpair failed and we were unable to recover it. 00:38:02.884 [2024-05-15 20:29:55.334904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.335189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.335219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.884 qpair failed and we were unable to recover it. 00:38:02.884 [2024-05-15 20:29:55.335538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.335832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.335859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.884 qpair failed and we were unable to recover it. 00:38:02.884 [2024-05-15 20:29:55.336269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.336640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.336669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.884 qpair failed and we were unable to recover it. 00:38:02.884 [2024-05-15 20:29:55.337084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.337519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.337547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.884 qpair failed and we were unable to recover it. 00:38:02.884 [2024-05-15 20:29:55.337789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.338036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.338063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.884 qpair failed and we were unable to recover it. 00:38:02.884 [2024-05-15 20:29:55.338286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.338621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.338649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.884 qpair failed and we were unable to recover it. 
00:38:02.884 [2024-05-15 20:29:55.339095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.339537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.339565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.884 qpair failed and we were unable to recover it. 00:38:02.884 [2024-05-15 20:29:55.340007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.340238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.340264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.884 qpair failed and we were unable to recover it. 00:38:02.884 [2024-05-15 20:29:55.340720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.341024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.341051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.884 qpair failed and we were unable to recover it. 00:38:02.884 [2024-05-15 20:29:55.341379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.341616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.341644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.884 qpair failed and we were unable to recover it. 00:38:02.884 [2024-05-15 20:29:55.341947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.342180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.342208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.884 qpair failed and we were unable to recover it. 00:38:02.884 [2024-05-15 20:29:55.342619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.342887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.342914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.884 qpair failed and we were unable to recover it. 00:38:02.884 [2024-05-15 20:29:55.343360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.343845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.343872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.884 qpair failed and we were unable to recover it. 
00:38:02.884 [2024-05-15 20:29:55.344120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.344364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.344395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.884 qpair failed and we were unable to recover it. 00:38:02.884 [2024-05-15 20:29:55.344832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.345270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.345297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.884 qpair failed and we were unable to recover it. 00:38:02.884 [2024-05-15 20:29:55.345499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.345745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.345771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.884 qpair failed and we were unable to recover it. 00:38:02.884 [2024-05-15 20:29:55.346212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.346632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.346660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.884 qpair failed and we were unable to recover it. 00:38:02.884 [2024-05-15 20:29:55.347118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.347521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.347550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.884 qpair failed and we were unable to recover it. 00:38:02.884 [2024-05-15 20:29:55.348031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.348472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.348501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.884 qpair failed and we were unable to recover it. 00:38:02.884 [2024-05-15 20:29:55.348933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.349189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.349216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.884 qpair failed and we were unable to recover it. 
00:38:02.884 [2024-05-15 20:29:55.349594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.350037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.350064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.884 qpair failed and we were unable to recover it. 00:38:02.884 [2024-05-15 20:29:55.350392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.350859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.350886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.884 qpair failed and we were unable to recover it. 00:38:02.884 [2024-05-15 20:29:55.351335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.884 [2024-05-15 20:29:55.351858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.351886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.885 qpair failed and we were unable to recover it. 00:38:02.885 [2024-05-15 20:29:55.352335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.352776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.352802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.885 qpair failed and we were unable to recover it. 00:38:02.885 [2024-05-15 20:29:55.353291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.353558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.353586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.885 qpair failed and we were unable to recover it. 00:38:02.885 [2024-05-15 20:29:55.354054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.354374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.354412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.885 qpair failed and we were unable to recover it. 00:38:02.885 [2024-05-15 20:29:55.354850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.355292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.355331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.885 qpair failed and we were unable to recover it. 
00:38:02.885 [2024-05-15 20:29:55.355580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.356003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.356031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.885 qpair failed and we were unable to recover it. 00:38:02.885 [2024-05-15 20:29:55.356432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.356720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.356746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.885 qpair failed and we were unable to recover it. 00:38:02.885 [2024-05-15 20:29:55.357198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.357644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.357672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.885 qpair failed and we were unable to recover it. 00:38:02.885 [2024-05-15 20:29:55.358116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.358532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.358561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.885 qpair failed and we were unable to recover it. 00:38:02.885 [2024-05-15 20:29:55.358969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.359428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.359456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.885 qpair failed and we were unable to recover it. 00:38:02.885 [2024-05-15 20:29:55.359858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.360093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.360120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.885 qpair failed and we were unable to recover it. 00:38:02.885 [2024-05-15 20:29:55.360586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.361020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.361047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.885 qpair failed and we were unable to recover it. 
00:38:02.885 [2024-05-15 20:29:55.361334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.361751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.361778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.885 qpair failed and we were unable to recover it. 00:38:02.885 [2024-05-15 20:29:55.362207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.362610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.362638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.885 qpair failed and we were unable to recover it. 00:38:02.885 [2024-05-15 20:29:55.362938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.363363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.363391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.885 qpair failed and we were unable to recover it. 00:38:02.885 [2024-05-15 20:29:55.363713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.364128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.364154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.885 qpair failed and we were unable to recover it. 00:38:02.885 [2024-05-15 20:29:55.364620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.365062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.365090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.885 qpair failed and we were unable to recover it. 00:38:02.885 [2024-05-15 20:29:55.365528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.365970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.365997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.885 qpair failed and we were unable to recover it. 00:38:02.885 [2024-05-15 20:29:55.366334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.366610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.366642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.885 qpair failed and we were unable to recover it. 
00:38:02.885 [2024-05-15 20:29:55.366918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.367342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.885 [2024-05-15 20:29:55.367371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:02.885 qpair failed and we were unable to recover it. 00:38:02.885 [2024-05-15 20:29:55.367847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.368295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.368339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.153 qpair failed and we were unable to recover it. 00:38:03.153 [2024-05-15 20:29:55.368747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.369184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.369212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.153 qpair failed and we were unable to recover it. 00:38:03.153 [2024-05-15 20:29:55.369466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.369934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.369963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.153 qpair failed and we were unable to recover it. 00:38:03.153 [2024-05-15 20:29:55.370212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.370653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.370682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.153 qpair failed and we were unable to recover it. 00:38:03.153 [2024-05-15 20:29:55.371119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.371531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.371559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.153 qpair failed and we were unable to recover it. 00:38:03.153 [2024-05-15 20:29:55.371987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.372362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.372390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.153 qpair failed and we were unable to recover it. 
00:38:03.153 [2024-05-15 20:29:55.372694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.373135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.373162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.153 qpair failed and we were unable to recover it. 00:38:03.153 [2024-05-15 20:29:55.373596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.374042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.374068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.153 qpair failed and we were unable to recover it. 00:38:03.153 [2024-05-15 20:29:55.374518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.374827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.374860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.153 qpair failed and we were unable to recover it. 00:38:03.153 [2024-05-15 20:29:55.375332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.375779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.375806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.153 qpair failed and we were unable to recover it. 00:38:03.153 [2024-05-15 20:29:55.376254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.376690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.376719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.153 qpair failed and we were unable to recover it. 00:38:03.153 [2024-05-15 20:29:55.377158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.377603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.377631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.153 qpair failed and we were unable to recover it. 00:38:03.153 [2024-05-15 20:29:55.377877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.378294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.378332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.153 qpair failed and we were unable to recover it. 
00:38:03.153 [2024-05-15 20:29:55.378749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.379204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.379231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.153 qpair failed and we were unable to recover it. 00:38:03.153 [2024-05-15 20:29:55.379644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.379889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.379916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.153 qpair failed and we were unable to recover it. 00:38:03.153 [2024-05-15 20:29:55.380332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.380815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.380842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.153 qpair failed and we were unable to recover it. 00:38:03.153 [2024-05-15 20:29:55.381258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.381700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.381730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.153 qpair failed and we were unable to recover it. 00:38:03.153 [2024-05-15 20:29:55.382160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.382714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.382817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.153 qpair failed and we were unable to recover it. 00:38:03.153 [2024-05-15 20:29:55.383338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.383701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.383738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.153 qpair failed and we were unable to recover it. 00:38:03.153 [2024-05-15 20:29:55.384201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.384452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.153 [2024-05-15 20:29:55.384481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.153 qpair failed and we were unable to recover it. 
00:38:03.154 [2024-05-15 20:29:55.384793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.385169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.385196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.154 qpair failed and we were unable to recover it. 00:38:03.154 [2024-05-15 20:29:55.385702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.386018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.386045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.154 qpair failed and we were unable to recover it. 00:38:03.154 [2024-05-15 20:29:55.386497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.386915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.386943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.154 qpair failed and we were unable to recover it. 00:38:03.154 [2024-05-15 20:29:55.387396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.387819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.387846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.154 qpair failed and we were unable to recover it. 00:38:03.154 [2024-05-15 20:29:55.388125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.388567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.388595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.154 qpair failed and we were unable to recover it. 00:38:03.154 [2024-05-15 20:29:55.389016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.389455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.389484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.154 qpair failed and we were unable to recover it. 00:38:03.154 [2024-05-15 20:29:55.389932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.390182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.390209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.154 qpair failed and we were unable to recover it. 
00:38:03.154 [2024-05-15 20:29:55.390699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.391141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.391167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.154 qpair failed and we were unable to recover it. 00:38:03.154 [2024-05-15 20:29:55.391595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.392038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.392065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.154 qpair failed and we were unable to recover it. 00:38:03.154 [2024-05-15 20:29:55.392500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.392814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.392841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.154 qpair failed and we were unable to recover it. 00:38:03.154 [2024-05-15 20:29:55.393095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.393539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.393568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.154 qpair failed and we were unable to recover it. 00:38:03.154 [2024-05-15 20:29:55.394019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.394262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.394287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.154 qpair failed and we were unable to recover it. 00:38:03.154 [2024-05-15 20:29:55.394740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.395132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.395158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.154 qpair failed and we were unable to recover it. 00:38:03.154 [2024-05-15 20:29:55.395675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.396104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.396131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.154 qpair failed and we were unable to recover it. 
00:38:03.154 [2024-05-15 20:29:55.396572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.396873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.396899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.154 qpair failed and we were unable to recover it. 00:38:03.154 [2024-05-15 20:29:55.397345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.397669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.397696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.154 qpair failed and we were unable to recover it. 00:38:03.154 [2024-05-15 20:29:55.398010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.398327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.398355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.154 qpair failed and we were unable to recover it. 00:38:03.154 [2024-05-15 20:29:55.398630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.398967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.398993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.154 qpair failed and we were unable to recover it. 00:38:03.154 [2024-05-15 20:29:55.399445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.399855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.399882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.154 qpair failed and we were unable to recover it. 00:38:03.154 [2024-05-15 20:29:55.400248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.400710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.400737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.154 qpair failed and we were unable to recover it. 00:38:03.154 [2024-05-15 20:29:55.401133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.401552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.401580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.154 qpair failed and we were unable to recover it. 
00:38:03.154 [2024-05-15 20:29:55.401751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.402228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.402255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.154 qpair failed and we were unable to recover it. 00:38:03.154 [2024-05-15 20:29:55.402369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.402821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.402848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.154 qpair failed and we were unable to recover it. 00:38:03.154 [2024-05-15 20:29:55.403299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.403746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.403773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.154 qpair failed and we were unable to recover it. 00:38:03.154 [2024-05-15 20:29:55.404016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.404447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.404475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.154 qpair failed and we were unable to recover it. 00:38:03.154 [2024-05-15 20:29:55.404915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.405356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.405384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.154 qpair failed and we were unable to recover it. 00:38:03.154 [2024-05-15 20:29:55.405809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.406065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.406091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.154 qpair failed and we were unable to recover it. 00:38:03.154 [2024-05-15 20:29:55.406508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.406957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.406983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.154 qpair failed and we were unable to recover it. 
00:38:03.154 [2024-05-15 20:29:55.407234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.407338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.154 [2024-05-15 20:29:55.407365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.155 qpair failed and we were unable to recover it. 00:38:03.155 [2024-05-15 20:29:55.407670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.408121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.408153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.155 qpair failed and we were unable to recover it. 00:38:03.155 [2024-05-15 20:29:55.408436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.408863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.408889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.155 qpair failed and we were unable to recover it. 00:38:03.155 [2024-05-15 20:29:55.409176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.409620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.409648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.155 qpair failed and we were unable to recover it. 00:38:03.155 [2024-05-15 20:29:55.410071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.410447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.410475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.155 qpair failed and we were unable to recover it. 00:38:03.155 [2024-05-15 20:29:55.410910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.411352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.411379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.155 qpair failed and we were unable to recover it. 00:38:03.155 [2024-05-15 20:29:55.411849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.412297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.412335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.155 qpair failed and we were unable to recover it. 
00:38:03.155 [2024-05-15 20:29:55.412768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.413221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.413248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.155 qpair failed and we were unable to recover it. 00:38:03.155 [2024-05-15 20:29:55.413682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.413931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.413957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.155 qpair failed and we were unable to recover it. 00:38:03.155 [2024-05-15 20:29:55.414206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.414615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.414642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.155 qpair failed and we were unable to recover it. 00:38:03.155 [2024-05-15 20:29:55.415137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.415599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.415626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.155 qpair failed and we were unable to recover it. 00:38:03.155 [2024-05-15 20:29:55.416092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.416541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.416568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.155 qpair failed and we were unable to recover it. 00:38:03.155 [2024-05-15 20:29:55.417018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.417429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.417457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.155 qpair failed and we were unable to recover it. 00:38:03.155 [2024-05-15 20:29:55.417899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.418342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.418371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.155 qpair failed and we were unable to recover it. 
00:38:03.155 [2024-05-15 20:29:55.418808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.419060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.419086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.155 qpair failed and we were unable to recover it. 00:38:03.155 [2024-05-15 20:29:55.419545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.419862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.419888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.155 qpair failed and we were unable to recover it. 00:38:03.155 [2024-05-15 20:29:55.420284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.420746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.420774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.155 qpair failed and we were unable to recover it. 00:38:03.155 [2024-05-15 20:29:55.421204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.421704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.421732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.155 qpair failed and we were unable to recover it. 00:38:03.155 [2024-05-15 20:29:55.422060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.422526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.422554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.155 qpair failed and we were unable to recover it. 00:38:03.155 [2024-05-15 20:29:55.422994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.423248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.423274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.155 qpair failed and we were unable to recover it. 00:38:03.155 [2024-05-15 20:29:55.423543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.423962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.423988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.155 qpair failed and we were unable to recover it. 
00:38:03.155 [2024-05-15 20:29:55.424283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.424717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.424744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.155 qpair failed and we were unable to recover it. 00:38:03.155 [2024-05-15 20:29:55.425056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.425503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.425531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.155 qpair failed and we were unable to recover it. 00:38:03.155 [2024-05-15 20:29:55.425855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.426173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.426199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.155 qpair failed and we were unable to recover it. 00:38:03.155 [2024-05-15 20:29:55.426625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.426976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.427002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.155 qpair failed and we were unable to recover it. 00:38:03.155 [2024-05-15 20:29:55.427432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.427879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.427905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.155 qpair failed and we were unable to recover it. 00:38:03.155 [2024-05-15 20:29:55.428358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.428631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.428657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.155 qpair failed and we were unable to recover it. 00:38:03.155 [2024-05-15 20:29:55.429113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.429552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.429580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.155 qpair failed and we were unable to recover it. 
00:38:03.155 [2024-05-15 20:29:55.430025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.430432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 [2024-05-15 20:29:55.430460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.155 qpair failed and we were unable to recover it. 00:38:03.155 [2024-05-15 20:29:55.430897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.155 20:29:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:38:03.155 [2024-05-15 20:29:55.431299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 20:29:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:38:03.156 [2024-05-15 20:29:55.431337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.156 qpair failed and we were unable to recover it. 00:38:03.156 [2024-05-15 20:29:55.431738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 20:29:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:03.156 20:29:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:03.156 [2024-05-15 20:29:55.432048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.432075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.156 qpair failed and we were unable to recover it. 00:38:03.156 20:29:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:03.156 [2024-05-15 20:29:55.432339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.432806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.432833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.156 qpair failed and we were unable to recover it. 00:38:03.156 [2024-05-15 20:29:55.433277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.433736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.433766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.156 qpair failed and we were unable to recover it. 00:38:03.156 [2024-05-15 20:29:55.434194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.434660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.434688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.156 qpair failed and we were unable to recover it. 
00:38:03.156 [2024-05-15 20:29:55.435131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.435363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.435390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.156 qpair failed and we were unable to recover it. 00:38:03.156 [2024-05-15 20:29:55.435903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.436304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.436342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.156 qpair failed and we were unable to recover it. 00:38:03.156 [2024-05-15 20:29:55.436662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.437101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.437134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.156 qpair failed and we were unable to recover it. 00:38:03.156 [2024-05-15 20:29:55.437647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.437938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.437964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.156 qpair failed and we were unable to recover it. 00:38:03.156 [2024-05-15 20:29:55.438391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.438827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.438855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.156 qpair failed and we were unable to recover it. 00:38:03.156 [2024-05-15 20:29:55.439330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.439771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.439799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.156 qpair failed and we were unable to recover it. 00:38:03.156 [2024-05-15 20:29:55.440199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.440617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.440649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.156 qpair failed and we were unable to recover it. 
00:38:03.156 [2024-05-15 20:29:55.441107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.441521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.441551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.156 qpair failed and we were unable to recover it. 00:38:03.156 [2024-05-15 20:29:55.441978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.442420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.442450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.156 qpair failed and we were unable to recover it. 00:38:03.156 [2024-05-15 20:29:55.442903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.443346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.443376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.156 qpair failed and we were unable to recover it. 00:38:03.156 [2024-05-15 20:29:55.443759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.444224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.444250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.156 qpair failed and we were unable to recover it. 00:38:03.156 [2024-05-15 20:29:55.444615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.444854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.444880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.156 qpair failed and we were unable to recover it. 00:38:03.156 [2024-05-15 20:29:55.445378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.445638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.445665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.156 qpair failed and we were unable to recover it. 00:38:03.156 [2024-05-15 20:29:55.446125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.446514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.446543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.156 qpair failed and we were unable to recover it. 
00:38:03.156 [2024-05-15 20:29:55.447056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.447465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.447494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.156 qpair failed and we were unable to recover it. 00:38:03.156 [2024-05-15 20:29:55.447732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.448186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.448213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.156 qpair failed and we were unable to recover it. 00:38:03.156 [2024-05-15 20:29:55.448649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.449111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.449137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.156 qpair failed and we were unable to recover it. 00:38:03.156 [2024-05-15 20:29:55.449557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.449999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.450028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.156 qpair failed and we were unable to recover it. 00:38:03.156 [2024-05-15 20:29:55.450289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.450715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.450744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.156 qpair failed and we were unable to recover it. 00:38:03.156 [2024-05-15 20:29:55.451177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.451667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.451695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.156 qpair failed and we were unable to recover it. 00:38:03.156 [2024-05-15 20:29:55.451825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.452210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.452240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.156 qpair failed and we were unable to recover it. 
00:38:03.156 [2024-05-15 20:29:55.452680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.453116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.453144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.156 qpair failed and we were unable to recover it. 00:38:03.156 [2024-05-15 20:29:55.453560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.453975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.156 [2024-05-15 20:29:55.454001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.156 qpair failed and we were unable to recover it. 00:38:03.156 [2024-05-15 20:29:55.454441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.454865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.454893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.157 qpair failed and we were unable to recover it. 00:38:03.157 [2024-05-15 20:29:55.455035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.455488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.455517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.157 qpair failed and we were unable to recover it. 00:38:03.157 [2024-05-15 20:29:55.455959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.456368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.456396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.157 qpair failed and we were unable to recover it. 00:38:03.157 [2024-05-15 20:29:55.456858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.457302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.457355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.157 qpair failed and we were unable to recover it. 00:38:03.157 [2024-05-15 20:29:55.457832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.458258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.458285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.157 qpair failed and we were unable to recover it. 
00:38:03.157 [2024-05-15 20:29:55.458720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.459126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.459153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.157 qpair failed and we were unable to recover it. 00:38:03.157 [2024-05-15 20:29:55.459563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.460001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.460029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.157 qpair failed and we were unable to recover it. 00:38:03.157 [2024-05-15 20:29:55.460429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.460840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.460866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.157 qpair failed and we were unable to recover it. 00:38:03.157 [2024-05-15 20:29:55.461111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.461409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.461436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.157 qpair failed and we were unable to recover it. 00:38:03.157 [2024-05-15 20:29:55.461901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.462213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.462241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.157 qpair failed and we were unable to recover it. 00:38:03.157 [2024-05-15 20:29:55.462666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.463109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.463136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.157 qpair failed and we were unable to recover it. 00:38:03.157 [2024-05-15 20:29:55.463247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.463644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.463673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.157 qpair failed and we were unable to recover it. 
00:38:03.157 [2024-05-15 20:29:55.463820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.464259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.464284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.157 qpair failed and we were unable to recover it. 00:38:03.157 [2024-05-15 20:29:55.464665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.465115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.465141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.157 qpair failed and we were unable to recover it. 00:38:03.157 [2024-05-15 20:29:55.465556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.465997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.466038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.157 qpair failed and we were unable to recover it. 00:38:03.157 [2024-05-15 20:29:55.466474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.466917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.466944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.157 qpair failed and we were unable to recover it. 00:38:03.157 [2024-05-15 20:29:55.467384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.467819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.467847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.157 qpair failed and we were unable to recover it. 00:38:03.157 [2024-05-15 20:29:55.468092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.468380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.468409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.157 qpair failed and we were unable to recover it. 00:38:03.157 [2024-05-15 20:29:55.468854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.469289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.469336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.157 qpair failed and we were unable to recover it. 
00:38:03.157 [2024-05-15 20:29:55.469774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.470235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.470264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.157 qpair failed and we were unable to recover it. 00:38:03.157 [2024-05-15 20:29:55.470516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.470947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.470985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.157 qpair failed and we were unable to recover it. 00:38:03.157 [2024-05-15 20:29:55.471412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.471867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.471896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.157 qpair failed and we were unable to recover it. 00:38:03.157 [2024-05-15 20:29:55.472336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.472719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.157 [2024-05-15 20:29:55.472750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.158 qpair failed and we were unable to recover it. 00:38:03.158 [2024-05-15 20:29:55.473150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.158 [2024-05-15 20:29:55.473378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.158 [2024-05-15 20:29:55.473407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.158 qpair failed and we were unable to recover it. 00:38:03.158 [2024-05-15 20:29:55.473860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.158 [2024-05-15 20:29:55.474288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.158 [2024-05-15 20:29:55.474344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.158 qpair failed and we were unable to recover it. 00:38:03.158 [2024-05-15 20:29:55.474802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.158 [2024-05-15 20:29:55.475229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.158 [2024-05-15 20:29:55.475259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.158 qpair failed and we were unable to recover it. 
00:38:03.159 20:29:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:03.159 [2024-05-15 20:29:55.475711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.475818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.475844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.159 qpair failed and we were unable to recover it. 00:38:03.159 20:29:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:03.159 [2024-05-15 20:29:55.476254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 20:29:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:03.159 [2024-05-15 20:29:55.476657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.476688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.159 qpair failed and we were unable to recover it. 00:38:03.159 20:29:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:03.159 [2024-05-15 20:29:55.477061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.477455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.477487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.159 qpair failed and we were unable to recover it. 00:38:03.159 [2024-05-15 20:29:55.477912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.478348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.478378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.159 qpair failed and we were unable to recover it. 00:38:03.159 [2024-05-15 20:29:55.478832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.479262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.479289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.159 qpair failed and we were unable to recover it. 00:38:03.159 [2024-05-15 20:29:55.479525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.479951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.479980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.159 qpair failed and we were unable to recover it. 
00:38:03.159 [2024-05-15 20:29:55.480411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.480843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.480871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.159 qpair failed and we were unable to recover it. 00:38:03.159 [2024-05-15 20:29:55.481227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.481657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.481694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.159 qpair failed and we were unable to recover it. 00:38:03.159 [2024-05-15 20:29:55.482087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.482510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.482540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.159 qpair failed and we were unable to recover it. 00:38:03.159 [2024-05-15 20:29:55.482993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.483294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.483340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.159 qpair failed and we were unable to recover it. 00:38:03.159 [2024-05-15 20:29:55.483815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.484242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.484271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.159 qpair failed and we were unable to recover it. 00:38:03.159 [2024-05-15 20:29:55.484705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.485166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.485195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.159 qpair failed and we were unable to recover it. 00:38:03.159 [2024-05-15 20:29:55.485454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.485896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.485924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.159 qpair failed and we were unable to recover it. 
00:38:03.159 [2024-05-15 20:29:55.486177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.486648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.486677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.159 qpair failed and we were unable to recover it. 00:38:03.159 [2024-05-15 20:29:55.487117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.487508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.487537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.159 qpair failed and we were unable to recover it. 00:38:03.159 [2024-05-15 20:29:55.487777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.488205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.488233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.159 qpair failed and we were unable to recover it. 00:38:03.159 [2024-05-15 20:29:55.488645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.489076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.489104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.159 qpair failed and we were unable to recover it. 00:38:03.159 [2024-05-15 20:29:55.489528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.489961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.489992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.159 qpair failed and we were unable to recover it. 00:38:03.159 [2024-05-15 20:29:55.490414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.490842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.490871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.159 qpair failed and we were unable to recover it. 00:38:03.159 [2024-05-15 20:29:55.491297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.491715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.491746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.159 qpair failed and we were unable to recover it. 
00:38:03.159 [2024-05-15 20:29:55.492061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.492507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.492536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.159 qpair failed and we were unable to recover it. 00:38:03.159 [2024-05-15 20:29:55.492820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.493248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.493277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.159 qpair failed and we were unable to recover it. 00:38:03.159 [2024-05-15 20:29:55.493524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.493778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.493808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.159 qpair failed and we were unable to recover it. 00:38:03.159 [2024-05-15 20:29:55.494219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.494648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.494678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.159 qpair failed and we were unable to recover it. 00:38:03.159 [2024-05-15 20:29:55.495126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.495539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.495569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.159 qpair failed and we were unable to recover it. 00:38:03.159 [2024-05-15 20:29:55.495848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.496216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.496246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.159 qpair failed and we were unable to recover it. 00:38:03.159 [2024-05-15 20:29:55.496699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.497014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.497042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.159 qpair failed and we were unable to recover it. 
00:38:03.159 [2024-05-15 20:29:55.497426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.497875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.497904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.159 qpair failed and we were unable to recover it. 00:38:03.159 [2024-05-15 20:29:55.498215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.159 [2024-05-15 20:29:55.498655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.498685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.160 qpair failed and we were unable to recover it. 00:38:03.160 [2024-05-15 20:29:55.499114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.499512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.499541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.160 qpair failed and we were unable to recover it. 00:38:03.160 Malloc0 00:38:03.160 [2024-05-15 20:29:55.499784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.500215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.500245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.160 qpair failed and we were unable to recover it. 00:38:03.160 20:29:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:03.160 [2024-05-15 20:29:55.500699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 20:29:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:38:03.160 [2024-05-15 20:29:55.501129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.501159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.160 qpair failed and we were unable to recover it. 00:38:03.160 20:29:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:03.160 [2024-05-15 20:29:55.501398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 20:29:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:03.160 [2024-05-15 20:29:55.501773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.501803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.160 qpair failed and we were unable to recover it. 
00:38:03.160 [2024-05-15 20:29:55.502275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.502761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.502793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.160 qpair failed and we were unable to recover it. 00:38:03.160 [2024-05-15 20:29:55.503270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.503502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.503533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.160 qpair failed and we were unable to recover it. 00:38:03.160 [2024-05-15 20:29:55.503980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.504271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.504301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.160 qpair failed and we were unable to recover it. 00:38:03.160 [2024-05-15 20:29:55.504772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.505076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.505108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.160 qpair failed and we were unable to recover it. 00:38:03.160 [2024-05-15 20:29:55.505355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.505664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.505695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.160 qpair failed and we were unable to recover it. 00:38:03.160 [2024-05-15 20:29:55.505974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.506218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.506246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.160 qpair failed and we were unable to recover it. 00:38:03.160 [2024-05-15 20:29:55.506485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.506944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.506972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.160 qpair failed and we were unable to recover it. 
00:38:03.160 [2024-05-15 20:29:55.507019] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:03.160 [2024-05-15 20:29:55.507423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.507890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.507919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.160 qpair failed and we were unable to recover it. 00:38:03.160 [2024-05-15 20:29:55.508347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.508824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.508852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.160 qpair failed and we were unable to recover it. 00:38:03.160 [2024-05-15 20:29:55.509169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.509580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.509610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.160 qpair failed and we were unable to recover it. 00:38:03.160 [2024-05-15 20:29:55.510043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.510464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.510494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.160 qpair failed and we were unable to recover it. 00:38:03.160 [2024-05-15 20:29:55.510923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.511286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.511325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.160 qpair failed and we were unable to recover it. 00:38:03.160 [2024-05-15 20:29:55.511781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.512085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.512113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.160 qpair failed and we were unable to recover it. 00:38:03.160 [2024-05-15 20:29:55.512580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.512883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.512912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.160 qpair failed and we were unable to recover it. 
00:38:03.160 [2024-05-15 20:29:55.513344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.513651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.513679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.160 qpair failed and we were unable to recover it. 00:38:03.160 [2024-05-15 20:29:55.514026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.514343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.514374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.160 qpair failed and we were unable to recover it. 00:38:03.160 [2024-05-15 20:29:55.514813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.515276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.515306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.160 qpair failed and we were unable to recover it. 00:38:03.160 [2024-05-15 20:29:55.515709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 20:29:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:03.160 [2024-05-15 20:29:55.516129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.516158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.160 qpair failed and we were unable to recover it. 00:38:03.160 20:29:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:03.160 [2024-05-15 20:29:55.516572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 20:29:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:03.160 20:29:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:03.160 [2024-05-15 20:29:55.517002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.517031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.160 qpair failed and we were unable to recover it. 00:38:03.160 [2024-05-15 20:29:55.517475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.517942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.517971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.160 qpair failed and we were unable to recover it. 
00:38:03.160 [2024-05-15 20:29:55.518402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.518825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.518854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.160 qpair failed and we were unable to recover it. 00:38:03.160 [2024-05-15 20:29:55.519162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.519549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.519579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.160 qpair failed and we were unable to recover it. 00:38:03.160 [2024-05-15 20:29:55.519761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.520208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.160 [2024-05-15 20:29:55.520238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.160 qpair failed and we were unable to recover it. 00:38:03.161 [2024-05-15 20:29:55.520687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.521113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.521141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.161 qpair failed and we were unable to recover it. 00:38:03.161 [2024-05-15 20:29:55.521563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.521923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.521951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.161 qpair failed and we were unable to recover it. 00:38:03.161 [2024-05-15 20:29:55.522407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.522640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.522671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.161 qpair failed and we were unable to recover it. 00:38:03.161 [2024-05-15 20:29:55.523098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.523531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.523560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.161 qpair failed and we were unable to recover it. 
00:38:03.161 [2024-05-15 20:29:55.523999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.524427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.524457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.161 qpair failed and we were unable to recover it. 00:38:03.161 [2024-05-15 20:29:55.524705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.524939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.524967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.161 qpair failed and we were unable to recover it. 00:38:03.161 [2024-05-15 20:29:55.525404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.525843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.525873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.161 qpair failed and we were unable to recover it. 00:38:03.161 [2024-05-15 20:29:55.526277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.526490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.526519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.161 qpair failed and we were unable to recover it. 00:38:03.161 [2024-05-15 20:29:55.526807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.527239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.527268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.161 qpair failed and we were unable to recover it. 00:38:03.161 [2024-05-15 20:29:55.527729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 20:29:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:03.161 [2024-05-15 20:29:55.528150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.528180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.161 qpair failed and we were unable to recover it. 
00:38:03.161 20:29:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:03.161 [2024-05-15 20:29:55.528512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 20:29:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:03.161 [2024-05-15 20:29:55.528942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.528970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.161 20:29:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:03.161 qpair failed and we were unable to recover it. 00:38:03.161 [2024-05-15 20:29:55.529404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.529873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.529902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.161 qpair failed and we were unable to recover it. 00:38:03.161 [2024-05-15 20:29:55.530337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.530754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.530782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.161 qpair failed and we were unable to recover it. 00:38:03.161 [2024-05-15 20:29:55.531090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.531340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.531369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.161 qpair failed and we were unable to recover it. 00:38:03.161 [2024-05-15 20:29:55.531696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.531990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.532019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.161 qpair failed and we were unable to recover it. 00:38:03.161 [2024-05-15 20:29:55.532526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.532942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.532970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.161 qpair failed and we were unable to recover it. 
00:38:03.161 [2024-05-15 20:29:55.533211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.533633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.533663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.161 qpair failed and we were unable to recover it. 00:38:03.161 [2024-05-15 20:29:55.534103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.534489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.534518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.161 qpair failed and we were unable to recover it. 00:38:03.161 [2024-05-15 20:29:55.534775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.535241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.535269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.161 qpair failed and we were unable to recover it. 00:38:03.161 [2024-05-15 20:29:55.535603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.536032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.536062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.161 qpair failed and we were unable to recover it. 00:38:03.161 [2024-05-15 20:29:55.536388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.536709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.536737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.161 qpair failed and we were unable to recover it. 00:38:03.161 [2024-05-15 20:29:55.536995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.537464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.537496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.161 qpair failed and we were unable to recover it. 00:38:03.161 [2024-05-15 20:29:55.537938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.538337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.538367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.161 qpair failed and we were unable to recover it. 
00:38:03.161 [2024-05-15 20:29:55.538806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.539227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.539256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.161 qpair failed and we were unable to recover it. 00:38:03.161 [2024-05-15 20:29:55.539684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 20:29:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:03.161 [2024-05-15 20:29:55.540122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.540151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.161 qpair failed and we were unable to recover it. 00:38:03.161 [2024-05-15 20:29:55.540490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 20:29:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:03.161 20:29:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:03.161 [2024-05-15 20:29:55.540939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.161 [2024-05-15 20:29:55.540967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.161 qpair failed and we were unable to recover it. 00:38:03.162 20:29:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:03.162 [2024-05-15 20:29:55.541430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.162 [2024-05-15 20:29:55.541876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.162 [2024-05-15 20:29:55.541904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.162 qpair failed and we were unable to recover it. 00:38:03.162 [2024-05-15 20:29:55.542280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.162 [2024-05-15 20:29:55.542760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.162 [2024-05-15 20:29:55.542789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.162 qpair failed and we were unable to recover it. 00:38:03.162 [2024-05-15 20:29:55.543056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.162 [2024-05-15 20:29:55.543485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.162 [2024-05-15 20:29:55.543514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.162 qpair failed and we were unable to recover it. 
00:38:03.162 [2024-05-15 20:29:55.543782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.162 [2024-05-15 20:29:55.544121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.162 [2024-05-15 20:29:55.544150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.162 qpair failed and we were unable to recover it. 00:38:03.162 [2024-05-15 20:29:55.544598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.162 [2024-05-15 20:29:55.545059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.162 [2024-05-15 20:29:55.545087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.162 qpair failed and we were unable to recover it. 00:38:03.162 [2024-05-15 20:29:55.545515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.162 [2024-05-15 20:29:55.545803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.162 [2024-05-15 20:29:55.545835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.162 qpair failed and we were unable to recover it. 00:38:03.162 [2024-05-15 20:29:55.546146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.162 [2024-05-15 20:29:55.546565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.162 [2024-05-15 20:29:55.546596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.162 qpair failed and we were unable to recover it. 00:38:03.162 [2024-05-15 20:29:55.546720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.162 [2024-05-15 20:29:55.546880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.162 [2024-05-15 20:29:55.546908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8d520 with addr=10.0.0.2, port=4420 00:38:03.162 qpair failed and we were unable to recover it. 
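For reference, errno 111 in the repeated posix_sock_create failures above is the standard Linux ECONNREFUSED ("Connection refused"), meaning nothing was accepting TCP connections on 10.0.0.2:4420 while the initiator kept retrying. A quick way to confirm that mapping on a test host (illustrative only, not part of the captured run):

# errno 111 on Linux maps to ECONNREFUSED
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# expected output: ECONNREFUSED - Connection refused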
00:38:03.162 [2024-05-15 20:29:55.547082] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:38:03.162 [2024-05-15 20:29:55.547348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.162 [2024-05-15 20:29:55.547416] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:03.162 20:29:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:03.162 20:29:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:03.162 20:29:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:03.162 20:29:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:03.162 [2024-05-15 20:29:55.554367] posix.c: 675:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set 00:38:03.162 [2024-05-15 20:29:55.554488] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8d520 (107): Transport endpoint is not connected 00:38:03.162 [2024-05-15 20:29:55.554598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.162 qpair failed and we were unable to recover it. 00:38:03.162 [2024-05-15 20:29:55.558284] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.162 [2024-05-15 20:29:55.558523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.162 [2024-05-15 20:29:55.558586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.162 [2024-05-15 20:29:55.558611] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.162 [2024-05-15 20:29:55.558632] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.162 [2024-05-15 20:29:55.558682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.162 qpair failed and we were unable to recover it. 
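The rpc_cmd traces interleaved above (nvmf_create_transport, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) are the target-side setup that ends with the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice. A minimal standalone sketch of the same sequence using SPDK's rpc.py, assuming a running nvmf_tgt and rpc.py on PATH (the NQN, serial number, bdev name, and address are taken from the trace; everything else is illustrative):

# create the TCP transport with default options (-o), as in the trace
rpc.py nvmf_create_transport -t tcp -o
# create the subsystem, allow any host (-a), use the serial seen in the trace
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
# attach the Malloc0 bdev as a namespace
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# expose the subsystem and the discovery service on 10.0.0.2:4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420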
00:38:03.162 20:29:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:03.162 20:29:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 325386 00:38:03.162 [2024-05-15 20:29:55.568061] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.162 [2024-05-15 20:29:55.568202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.162 [2024-05-15 20:29:55.568245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.162 [2024-05-15 20:29:55.568262] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.162 [2024-05-15 20:29:55.568276] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.162 [2024-05-15 20:29:55.568311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.162 qpair failed and we were unable to recover it. 00:38:03.162 [2024-05-15 20:29:55.578072] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.162 [2024-05-15 20:29:55.578190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.162 [2024-05-15 20:29:55.578224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.162 [2024-05-15 20:29:55.578237] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.162 [2024-05-15 20:29:55.578247] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.162 [2024-05-15 20:29:55.578275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.162 qpair failed and we were unable to recover it. 00:38:03.162 [2024-05-15 20:29:55.588009] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.162 [2024-05-15 20:29:55.588112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.162 [2024-05-15 20:29:55.588141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.162 [2024-05-15 20:29:55.588150] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.162 [2024-05-15 20:29:55.588159] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.162 [2024-05-15 20:29:55.588180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.162 qpair failed and we were unable to recover it. 
00:38:03.162 [2024-05-15 20:29:55.598058] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.162 [2024-05-15 20:29:55.598146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.162 [2024-05-15 20:29:55.598181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.162 [2024-05-15 20:29:55.598192] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.162 [2024-05-15 20:29:55.598199] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.162 [2024-05-15 20:29:55.598220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.162 qpair failed and we were unable to recover it. 00:38:03.162 [2024-05-15 20:29:55.608055] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.162 [2024-05-15 20:29:55.608165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.162 [2024-05-15 20:29:55.608197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.162 [2024-05-15 20:29:55.608207] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.162 [2024-05-15 20:29:55.608214] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.162 [2024-05-15 20:29:55.608236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.162 qpair failed and we were unable to recover it. 00:38:03.162 [2024-05-15 20:29:55.617977] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.162 [2024-05-15 20:29:55.618068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.162 [2024-05-15 20:29:55.618097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.162 [2024-05-15 20:29:55.618106] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.162 [2024-05-15 20:29:55.618112] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.162 [2024-05-15 20:29:55.618133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.162 qpair failed and we were unable to recover it. 
00:38:03.162 [2024-05-15 20:29:55.628119] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.162 [2024-05-15 20:29:55.628226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.163 [2024-05-15 20:29:55.628254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.163 [2024-05-15 20:29:55.628263] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.163 [2024-05-15 20:29:55.628271] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.163 [2024-05-15 20:29:55.628292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.163 qpair failed and we were unable to recover it. 00:38:03.163 [2024-05-15 20:29:55.638179] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.163 [2024-05-15 20:29:55.638278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.163 [2024-05-15 20:29:55.638306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.163 [2024-05-15 20:29:55.638323] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.163 [2024-05-15 20:29:55.638331] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.163 [2024-05-15 20:29:55.638359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.163 qpair failed and we were unable to recover it. 00:38:03.163 [2024-05-15 20:29:55.648251] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.163 [2024-05-15 20:29:55.648355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.163 [2024-05-15 20:29:55.648384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.163 [2024-05-15 20:29:55.648394] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.163 [2024-05-15 20:29:55.648401] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.427 [2024-05-15 20:29:55.648422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.427 qpair failed and we were unable to recover it. 
00:38:03.427 [2024-05-15 20:29:55.658227] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.427 [2024-05-15 20:29:55.658324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.427 [2024-05-15 20:29:55.658353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.427 [2024-05-15 20:29:55.658363] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.427 [2024-05-15 20:29:55.658370] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.427 [2024-05-15 20:29:55.658391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.427 qpair failed and we were unable to recover it. 00:38:03.427 [2024-05-15 20:29:55.668195] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.427 [2024-05-15 20:29:55.668296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.427 [2024-05-15 20:29:55.668340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.427 [2024-05-15 20:29:55.668349] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.427 [2024-05-15 20:29:55.668357] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.427 [2024-05-15 20:29:55.668379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.427 qpair failed and we were unable to recover it. 00:38:03.427 [2024-05-15 20:29:55.678248] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.427 [2024-05-15 20:29:55.678341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.427 [2024-05-15 20:29:55.678370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.427 [2024-05-15 20:29:55.678381] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.427 [2024-05-15 20:29:55.678388] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.427 [2024-05-15 20:29:55.678409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.427 qpair failed and we were unable to recover it. 
00:38:03.427 [2024-05-15 20:29:55.688174] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.427 [2024-05-15 20:29:55.688263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.427 [2024-05-15 20:29:55.688302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.427 [2024-05-15 20:29:55.688323] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.427 [2024-05-15 20:29:55.688331] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.427 [2024-05-15 20:29:55.688355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.427 qpair failed and we were unable to recover it. 00:38:03.427 [2024-05-15 20:29:55.698330] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.427 [2024-05-15 20:29:55.698416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.427 [2024-05-15 20:29:55.698446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.427 [2024-05-15 20:29:55.698455] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.427 [2024-05-15 20:29:55.698462] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.427 [2024-05-15 20:29:55.698485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.427 qpair failed and we were unable to recover it. 00:38:03.427 [2024-05-15 20:29:55.708382] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.427 [2024-05-15 20:29:55.708485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.427 [2024-05-15 20:29:55.708514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.427 [2024-05-15 20:29:55.708524] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.427 [2024-05-15 20:29:55.708531] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.427 [2024-05-15 20:29:55.708552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.427 qpair failed and we were unable to recover it. 
00:38:03.427 [2024-05-15 20:29:55.718382] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.427 [2024-05-15 20:29:55.718469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.427 [2024-05-15 20:29:55.718498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.427 [2024-05-15 20:29:55.718507] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.427 [2024-05-15 20:29:55.718514] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.427 [2024-05-15 20:29:55.718535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.427 qpair failed and we were unable to recover it. 00:38:03.427 [2024-05-15 20:29:55.728561] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.427 [2024-05-15 20:29:55.728656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.427 [2024-05-15 20:29:55.728685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.427 [2024-05-15 20:29:55.728694] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.427 [2024-05-15 20:29:55.728701] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.427 [2024-05-15 20:29:55.728729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.427 qpair failed and we were unable to recover it. 00:38:03.427 [2024-05-15 20:29:55.738543] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.427 [2024-05-15 20:29:55.738712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.427 [2024-05-15 20:29:55.738741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.427 [2024-05-15 20:29:55.738750] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.427 [2024-05-15 20:29:55.738757] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.427 [2024-05-15 20:29:55.738778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.427 qpair failed and we were unable to recover it. 
00:38:03.427 [2024-05-15 20:29:55.748463] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.427 [2024-05-15 20:29:55.748574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.427 [2024-05-15 20:29:55.748601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.427 [2024-05-15 20:29:55.748610] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.427 [2024-05-15 20:29:55.748617] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.427 [2024-05-15 20:29:55.748636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.427 qpair failed and we were unable to recover it. 00:38:03.427 [2024-05-15 20:29:55.758584] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.427 [2024-05-15 20:29:55.758675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.427 [2024-05-15 20:29:55.758703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.427 [2024-05-15 20:29:55.758712] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.427 [2024-05-15 20:29:55.758719] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.428 [2024-05-15 20:29:55.758740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.428 qpair failed and we were unable to recover it. 00:38:03.428 [2024-05-15 20:29:55.768593] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.428 [2024-05-15 20:29:55.768697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.428 [2024-05-15 20:29:55.768724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.428 [2024-05-15 20:29:55.768734] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.428 [2024-05-15 20:29:55.768741] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.428 [2024-05-15 20:29:55.768760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.428 qpair failed and we were unable to recover it. 
00:38:03.428 [2024-05-15 20:29:55.778545] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.428 [2024-05-15 20:29:55.778630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.428 [2024-05-15 20:29:55.778664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.428 [2024-05-15 20:29:55.778673] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.428 [2024-05-15 20:29:55.778680] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.428 [2024-05-15 20:29:55.778701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.428 qpair failed and we were unable to recover it. 00:38:03.428 [2024-05-15 20:29:55.788613] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.428 [2024-05-15 20:29:55.788718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.428 [2024-05-15 20:29:55.788746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.428 [2024-05-15 20:29:55.788756] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.428 [2024-05-15 20:29:55.788762] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.428 [2024-05-15 20:29:55.788782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.428 qpair failed and we were unable to recover it. 00:38:03.428 [2024-05-15 20:29:55.798646] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.428 [2024-05-15 20:29:55.798743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.428 [2024-05-15 20:29:55.798771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.428 [2024-05-15 20:29:55.798779] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.428 [2024-05-15 20:29:55.798786] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.428 [2024-05-15 20:29:55.798808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.428 qpair failed and we were unable to recover it. 
00:38:03.428 [2024-05-15 20:29:55.808721] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.428 [2024-05-15 20:29:55.808840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.428 [2024-05-15 20:29:55.808868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.428 [2024-05-15 20:29:55.808878] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.428 [2024-05-15 20:29:55.808885] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.428 [2024-05-15 20:29:55.808906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.428 qpair failed and we were unable to recover it. 00:38:03.428 [2024-05-15 20:29:55.818691] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.428 [2024-05-15 20:29:55.818781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.428 [2024-05-15 20:29:55.818810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.428 [2024-05-15 20:29:55.818819] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.428 [2024-05-15 20:29:55.818826] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.428 [2024-05-15 20:29:55.818853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.428 qpair failed and we were unable to recover it. 00:38:03.428 [2024-05-15 20:29:55.828624] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.428 [2024-05-15 20:29:55.828760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.428 [2024-05-15 20:29:55.828789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.428 [2024-05-15 20:29:55.828799] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.428 [2024-05-15 20:29:55.828806] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.428 [2024-05-15 20:29:55.828828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.428 qpair failed and we were unable to recover it. 
00:38:03.428 [2024-05-15 20:29:55.838805] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.428 [2024-05-15 20:29:55.838894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.428 [2024-05-15 20:29:55.838921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.428 [2024-05-15 20:29:55.838930] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.428 [2024-05-15 20:29:55.838937] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.428 [2024-05-15 20:29:55.838959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.428 qpair failed and we were unable to recover it. 00:38:03.428 [2024-05-15 20:29:55.848794] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.428 [2024-05-15 20:29:55.848919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.428 [2024-05-15 20:29:55.848948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.428 [2024-05-15 20:29:55.848958] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.428 [2024-05-15 20:29:55.848964] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.428 [2024-05-15 20:29:55.848984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.428 qpair failed and we were unable to recover it. 00:38:03.428 [2024-05-15 20:29:55.858848] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.428 [2024-05-15 20:29:55.858939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.428 [2024-05-15 20:29:55.858967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.428 [2024-05-15 20:29:55.858977] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.428 [2024-05-15 20:29:55.858983] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.428 [2024-05-15 20:29:55.859004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.428 qpair failed and we were unable to recover it. 
00:38:03.428 [2024-05-15 20:29:55.868871] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.428 [2024-05-15 20:29:55.868976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.428 [2024-05-15 20:29:55.869023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.428 [2024-05-15 20:29:55.869034] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.428 [2024-05-15 20:29:55.869041] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.428 [2024-05-15 20:29:55.869066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.428 qpair failed and we were unable to recover it. 00:38:03.428 [2024-05-15 20:29:55.878909] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.428 [2024-05-15 20:29:55.878997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.428 [2024-05-15 20:29:55.879026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.428 [2024-05-15 20:29:55.879036] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.428 [2024-05-15 20:29:55.879043] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.428 [2024-05-15 20:29:55.879064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.428 qpair failed and we were unable to recover it. 00:38:03.428 [2024-05-15 20:29:55.888893] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.428 [2024-05-15 20:29:55.888982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.428 [2024-05-15 20:29:55.889010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.428 [2024-05-15 20:29:55.889019] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.428 [2024-05-15 20:29:55.889026] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.428 [2024-05-15 20:29:55.889047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.428 qpair failed and we were unable to recover it. 
00:38:03.428 [2024-05-15 20:29:55.898915] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.428 [2024-05-15 20:29:55.899006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.428 [2024-05-15 20:29:55.899034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.428 [2024-05-15 20:29:55.899043] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.429 [2024-05-15 20:29:55.899050] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.429 [2024-05-15 20:29:55.899071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.429 qpair failed and we were unable to recover it. 00:38:03.429 [2024-05-15 20:29:55.908936] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.429 [2024-05-15 20:29:55.909073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.429 [2024-05-15 20:29:55.909101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.429 [2024-05-15 20:29:55.909110] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.429 [2024-05-15 20:29:55.909127] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.429 [2024-05-15 20:29:55.909148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.429 qpair failed and we were unable to recover it. 00:38:03.429 [2024-05-15 20:29:55.918962] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.429 [2024-05-15 20:29:55.919060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.429 [2024-05-15 20:29:55.919089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.429 [2024-05-15 20:29:55.919098] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.429 [2024-05-15 20:29:55.919105] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.429 [2024-05-15 20:29:55.919126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.429 qpair failed and we were unable to recover it. 
00:38:03.691 [2024-05-15 20:29:55.929000] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.691 [2024-05-15 20:29:55.929085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.691 [2024-05-15 20:29:55.929113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.691 [2024-05-15 20:29:55.929123] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.691 [2024-05-15 20:29:55.929131] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.691 [2024-05-15 20:29:55.929152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.691 qpair failed and we were unable to recover it. 00:38:03.691 [2024-05-15 20:29:55.939108] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.691 [2024-05-15 20:29:55.939197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.691 [2024-05-15 20:29:55.939224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.691 [2024-05-15 20:29:55.939234] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.691 [2024-05-15 20:29:55.939243] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.691 [2024-05-15 20:29:55.939264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.691 qpair failed and we were unable to recover it. 00:38:03.691 [2024-05-15 20:29:55.949091] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.691 [2024-05-15 20:29:55.949190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.691 [2024-05-15 20:29:55.949219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.691 [2024-05-15 20:29:55.949230] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.691 [2024-05-15 20:29:55.949238] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.691 [2024-05-15 20:29:55.949261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.691 qpair failed and we were unable to recover it. 
00:38:03.691 [2024-05-15 20:29:55.959134] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.691 [2024-05-15 20:29:55.959225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.691 [2024-05-15 20:29:55.959253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.691 [2024-05-15 20:29:55.959263] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.691 [2024-05-15 20:29:55.959270] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.691 [2024-05-15 20:29:55.959291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.691 qpair failed and we were unable to recover it. 00:38:03.691 [2024-05-15 20:29:55.969069] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.691 [2024-05-15 20:29:55.969153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.691 [2024-05-15 20:29:55.969180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.691 [2024-05-15 20:29:55.969189] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.691 [2024-05-15 20:29:55.969197] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.691 [2024-05-15 20:29:55.969217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.691 qpair failed and we were unable to recover it. 00:38:03.691 [2024-05-15 20:29:55.979133] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.691 [2024-05-15 20:29:55.979221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.692 [2024-05-15 20:29:55.979249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.692 [2024-05-15 20:29:55.979258] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.692 [2024-05-15 20:29:55.979265] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.692 [2024-05-15 20:29:55.979286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.692 qpair failed and we were unable to recover it. 
00:38:03.692 [2024-05-15 20:29:55.989182] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.692 [2024-05-15 20:29:55.989284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.692 [2024-05-15 20:29:55.989322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.692 [2024-05-15 20:29:55.989332] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.692 [2024-05-15 20:29:55.989341] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.692 [2024-05-15 20:29:55.989362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.692 qpair failed and we were unable to recover it. 00:38:03.692 [2024-05-15 20:29:55.999227] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.692 [2024-05-15 20:29:55.999351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.692 [2024-05-15 20:29:55.999381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.692 [2024-05-15 20:29:55.999393] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.692 [2024-05-15 20:29:55.999409] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.692 [2024-05-15 20:29:55.999431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.692 qpair failed and we were unable to recover it. 00:38:03.692 [2024-05-15 20:29:56.009232] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.692 [2024-05-15 20:29:56.009335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.692 [2024-05-15 20:29:56.009367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.692 [2024-05-15 20:29:56.009376] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.692 [2024-05-15 20:29:56.009383] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.692 [2024-05-15 20:29:56.009404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.692 qpair failed and we were unable to recover it. 
00:38:03.692 [2024-05-15 20:29:56.019251] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.692 [2024-05-15 20:29:56.019350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.692 [2024-05-15 20:29:56.019379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.692 [2024-05-15 20:29:56.019388] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.692 [2024-05-15 20:29:56.019396] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.692 [2024-05-15 20:29:56.019417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.692 qpair failed and we were unable to recover it. 00:38:03.692 [2024-05-15 20:29:56.029278] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.692 [2024-05-15 20:29:56.029375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.692 [2024-05-15 20:29:56.029405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.692 [2024-05-15 20:29:56.029413] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.692 [2024-05-15 20:29:56.029421] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.692 [2024-05-15 20:29:56.029441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.692 qpair failed and we were unable to recover it. 00:38:03.692 [2024-05-15 20:29:56.039282] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.692 [2024-05-15 20:29:56.039391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.692 [2024-05-15 20:29:56.039419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.692 [2024-05-15 20:29:56.039429] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.692 [2024-05-15 20:29:56.039437] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.692 [2024-05-15 20:29:56.039457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.692 qpair failed and we were unable to recover it. 
00:38:03.692 [2024-05-15 20:29:56.049219] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.692 [2024-05-15 20:29:56.049324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.692 [2024-05-15 20:29:56.049352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.692 [2024-05-15 20:29:56.049361] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.692 [2024-05-15 20:29:56.049369] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.692 [2024-05-15 20:29:56.049391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.692 qpair failed and we were unable to recover it. 00:38:03.692 [2024-05-15 20:29:56.059366] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.692 [2024-05-15 20:29:56.059456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.692 [2024-05-15 20:29:56.059483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.692 [2024-05-15 20:29:56.059492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.692 [2024-05-15 20:29:56.059500] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.692 [2024-05-15 20:29:56.059521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.692 qpair failed and we were unable to recover it. 00:38:03.692 [2024-05-15 20:29:56.069388] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.692 [2024-05-15 20:29:56.069485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.692 [2024-05-15 20:29:56.069514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.692 [2024-05-15 20:29:56.069523] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.692 [2024-05-15 20:29:56.069531] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.692 [2024-05-15 20:29:56.069553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.692 qpair failed and we were unable to recover it. 
00:38:03.692 [2024-05-15 20:29:56.079461] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.692 [2024-05-15 20:29:56.079582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.692 [2024-05-15 20:29:56.079611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.692 [2024-05-15 20:29:56.079620] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.692 [2024-05-15 20:29:56.079628] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.692 [2024-05-15 20:29:56.079650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.692 qpair failed and we were unable to recover it. 00:38:03.692 [2024-05-15 20:29:56.089475] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.692 [2024-05-15 20:29:56.089600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.692 [2024-05-15 20:29:56.089628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.692 [2024-05-15 20:29:56.089637] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.692 [2024-05-15 20:29:56.089650] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.692 [2024-05-15 20:29:56.089672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.692 qpair failed and we were unable to recover it. 00:38:03.692 [2024-05-15 20:29:56.099522] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.692 [2024-05-15 20:29:56.099613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.692 [2024-05-15 20:29:56.099641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.692 [2024-05-15 20:29:56.099650] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.692 [2024-05-15 20:29:56.099657] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.692 [2024-05-15 20:29:56.099680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.692 qpair failed and we were unable to recover it. 
00:38:03.693 [2024-05-15 20:29:56.109561] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.693 [2024-05-15 20:29:56.109666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.693 [2024-05-15 20:29:56.109695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.693 [2024-05-15 20:29:56.109704] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.693 [2024-05-15 20:29:56.109712] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.693 [2024-05-15 20:29:56.109732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.693 qpair failed and we were unable to recover it. 00:38:03.693 [2024-05-15 20:29:56.119534] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.693 [2024-05-15 20:29:56.119624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.693 [2024-05-15 20:29:56.119652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.693 [2024-05-15 20:29:56.119661] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.693 [2024-05-15 20:29:56.119668] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.693 [2024-05-15 20:29:56.119689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.693 qpair failed and we were unable to recover it. 00:38:03.693 [2024-05-15 20:29:56.129588] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.693 [2024-05-15 20:29:56.129683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.693 [2024-05-15 20:29:56.129712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.693 [2024-05-15 20:29:56.129721] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.693 [2024-05-15 20:29:56.129728] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.693 [2024-05-15 20:29:56.129749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.693 qpair failed and we were unable to recover it. 
00:38:03.693 [2024-05-15 20:29:56.139619] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.693 [2024-05-15 20:29:56.139717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.693 [2024-05-15 20:29:56.139745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.693 [2024-05-15 20:29:56.139755] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.693 [2024-05-15 20:29:56.139762] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.693 [2024-05-15 20:29:56.139783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.693 qpair failed and we were unable to recover it. 00:38:03.693 [2024-05-15 20:29:56.149673] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.693 [2024-05-15 20:29:56.149762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.693 [2024-05-15 20:29:56.149790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.693 [2024-05-15 20:29:56.149800] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.693 [2024-05-15 20:29:56.149807] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.693 [2024-05-15 20:29:56.149827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.693 qpair failed and we were unable to recover it. 00:38:03.693 [2024-05-15 20:29:56.159570] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.693 [2024-05-15 20:29:56.159659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.693 [2024-05-15 20:29:56.159686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.693 [2024-05-15 20:29:56.159695] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.693 [2024-05-15 20:29:56.159702] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.693 [2024-05-15 20:29:56.159725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.693 qpair failed and we were unable to recover it. 
00:38:03.693 [2024-05-15 20:29:56.169702] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.693 [2024-05-15 20:29:56.169787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.693 [2024-05-15 20:29:56.169816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.693 [2024-05-15 20:29:56.169827] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.693 [2024-05-15 20:29:56.169834] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.693 [2024-05-15 20:29:56.169854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.693 qpair failed and we were unable to recover it. 00:38:03.693 [2024-05-15 20:29:56.179738] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.693 [2024-05-15 20:29:56.179830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.693 [2024-05-15 20:29:56.179858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.693 [2024-05-15 20:29:56.179874] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.693 [2024-05-15 20:29:56.179881] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.693 [2024-05-15 20:29:56.179902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.693 qpair failed and we were unable to recover it. 00:38:03.693 [2024-05-15 20:29:56.189745] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.693 [2024-05-15 20:29:56.189837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.693 [2024-05-15 20:29:56.189865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.693 [2024-05-15 20:29:56.189875] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.693 [2024-05-15 20:29:56.189882] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.693 [2024-05-15 20:29:56.189903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.693 qpair failed and we were unable to recover it. 
00:38:03.957 [2024-05-15 20:29:56.199847] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.957 [2024-05-15 20:29:56.199928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.957 [2024-05-15 20:29:56.199956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.957 [2024-05-15 20:29:56.199966] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.957 [2024-05-15 20:29:56.199973] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.957 [2024-05-15 20:29:56.199995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.957 qpair failed and we were unable to recover it. 00:38:03.957 [2024-05-15 20:29:56.209845] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.957 [2024-05-15 20:29:56.209930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.957 [2024-05-15 20:29:56.209960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.957 [2024-05-15 20:29:56.209969] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.957 [2024-05-15 20:29:56.209976] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.957 [2024-05-15 20:29:56.209997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.957 qpair failed and we were unable to recover it. 00:38:03.957 [2024-05-15 20:29:56.219928] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.957 [2024-05-15 20:29:56.220023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.957 [2024-05-15 20:29:56.220063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.957 [2024-05-15 20:29:56.220073] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.957 [2024-05-15 20:29:56.220081] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.957 [2024-05-15 20:29:56.220107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.957 qpair failed and we were unable to recover it. 
00:38:03.957 [2024-05-15 20:29:56.229952] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.957 [2024-05-15 20:29:56.230046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.957 [2024-05-15 20:29:56.230087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.957 [2024-05-15 20:29:56.230098] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.957 [2024-05-15 20:29:56.230105] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.957 [2024-05-15 20:29:56.230131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.957 qpair failed and we were unable to recover it. 00:38:03.957 [2024-05-15 20:29:56.239982] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.957 [2024-05-15 20:29:56.240157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.957 [2024-05-15 20:29:56.240198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.957 [2024-05-15 20:29:56.240208] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.957 [2024-05-15 20:29:56.240216] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.957 [2024-05-15 20:29:56.240239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.957 qpair failed and we were unable to recover it. 00:38:03.957 [2024-05-15 20:29:56.250016] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.957 [2024-05-15 20:29:56.250119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.957 [2024-05-15 20:29:56.250150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.957 [2024-05-15 20:29:56.250159] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.957 [2024-05-15 20:29:56.250167] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.957 [2024-05-15 20:29:56.250188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.957 qpair failed and we were unable to recover it. 
00:38:03.957 [2024-05-15 20:29:56.259963] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.957 [2024-05-15 20:29:56.260051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.957 [2024-05-15 20:29:56.260080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.957 [2024-05-15 20:29:56.260089] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.957 [2024-05-15 20:29:56.260096] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.957 [2024-05-15 20:29:56.260117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.957 qpair failed and we were unable to recover it. 00:38:03.957 [2024-05-15 20:29:56.270072] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.957 [2024-05-15 20:29:56.270170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.957 [2024-05-15 20:29:56.270199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.957 [2024-05-15 20:29:56.270215] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.957 [2024-05-15 20:29:56.270222] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.957 [2024-05-15 20:29:56.270242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.957 qpair failed and we were unable to recover it. 00:38:03.957 [2024-05-15 20:29:56.280085] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.957 [2024-05-15 20:29:56.280178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.957 [2024-05-15 20:29:56.280206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.957 [2024-05-15 20:29:56.280216] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.957 [2024-05-15 20:29:56.280223] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.957 [2024-05-15 20:29:56.280244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.957 qpair failed and we were unable to recover it. 
00:38:03.957 [2024-05-15 20:29:56.290097] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.957 [2024-05-15 20:29:56.290263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.957 [2024-05-15 20:29:56.290293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.957 [2024-05-15 20:29:56.290302] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.957 [2024-05-15 20:29:56.290310] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.957 [2024-05-15 20:29:56.290341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.957 qpair failed and we were unable to recover it. 00:38:03.957 [2024-05-15 20:29:56.300130] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.957 [2024-05-15 20:29:56.300220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.957 [2024-05-15 20:29:56.300248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.957 [2024-05-15 20:29:56.300257] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.958 [2024-05-15 20:29:56.300264] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.958 [2024-05-15 20:29:56.300285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.958 qpair failed and we were unable to recover it. 00:38:03.958 [2024-05-15 20:29:56.310165] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.958 [2024-05-15 20:29:56.310266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.958 [2024-05-15 20:29:56.310295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.958 [2024-05-15 20:29:56.310304] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.958 [2024-05-15 20:29:56.310320] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.958 [2024-05-15 20:29:56.310341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.958 qpair failed and we were unable to recover it. 
00:38:03.958 [2024-05-15 20:29:56.320189] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.958 [2024-05-15 20:29:56.320288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.958 [2024-05-15 20:29:56.320325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.958 [2024-05-15 20:29:56.320335] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.958 [2024-05-15 20:29:56.320341] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.958 [2024-05-15 20:29:56.320363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.958 qpair failed and we were unable to recover it. 00:38:03.958 [2024-05-15 20:29:56.330146] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.958 [2024-05-15 20:29:56.330234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.958 [2024-05-15 20:29:56.330262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.958 [2024-05-15 20:29:56.330272] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.958 [2024-05-15 20:29:56.330279] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.958 [2024-05-15 20:29:56.330299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.958 qpair failed and we were unable to recover it. 00:38:03.958 [2024-05-15 20:29:56.340253] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.958 [2024-05-15 20:29:56.340349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.958 [2024-05-15 20:29:56.340379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.958 [2024-05-15 20:29:56.340388] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.958 [2024-05-15 20:29:56.340394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.958 [2024-05-15 20:29:56.340416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.958 qpair failed and we were unable to recover it. 
00:38:03.958 [2024-05-15 20:29:56.350267] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.958 [2024-05-15 20:29:56.350422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.958 [2024-05-15 20:29:56.350451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.958 [2024-05-15 20:29:56.350460] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.958 [2024-05-15 20:29:56.350467] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.958 [2024-05-15 20:29:56.350488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.958 qpair failed and we were unable to recover it. 00:38:03.958 [2024-05-15 20:29:56.360293] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.958 [2024-05-15 20:29:56.360407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.958 [2024-05-15 20:29:56.360435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.958 [2024-05-15 20:29:56.360453] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.958 [2024-05-15 20:29:56.360460] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.958 [2024-05-15 20:29:56.360480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.958 qpair failed and we were unable to recover it. 00:38:03.958 [2024-05-15 20:29:56.370328] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.958 [2024-05-15 20:29:56.370420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.958 [2024-05-15 20:29:56.370448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.958 [2024-05-15 20:29:56.370456] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.958 [2024-05-15 20:29:56.370463] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.958 [2024-05-15 20:29:56.370484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.958 qpair failed and we were unable to recover it. 
00:38:03.958 [2024-05-15 20:29:56.380381] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.958 [2024-05-15 20:29:56.380470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.958 [2024-05-15 20:29:56.380498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.958 [2024-05-15 20:29:56.380507] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.958 [2024-05-15 20:29:56.380514] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.958 [2024-05-15 20:29:56.380534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.958 qpair failed and we were unable to recover it. 00:38:03.958 [2024-05-15 20:29:56.390419] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.958 [2024-05-15 20:29:56.390521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.958 [2024-05-15 20:29:56.390549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.958 [2024-05-15 20:29:56.390559] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.958 [2024-05-15 20:29:56.390565] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.958 [2024-05-15 20:29:56.390585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.958 qpair failed and we were unable to recover it. 00:38:03.958 [2024-05-15 20:29:56.400454] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.958 [2024-05-15 20:29:56.400546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.958 [2024-05-15 20:29:56.400575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.958 [2024-05-15 20:29:56.400583] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.958 [2024-05-15 20:29:56.400590] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.958 [2024-05-15 20:29:56.400612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.958 qpair failed and we were unable to recover it. 
00:38:03.958 [2024-05-15 20:29:56.410480] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.958 [2024-05-15 20:29:56.410575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.958 [2024-05-15 20:29:56.410606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.958 [2024-05-15 20:29:56.410614] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.958 [2024-05-15 20:29:56.410621] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.958 [2024-05-15 20:29:56.410644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.958 qpair failed and we were unable to recover it. 00:38:03.958 [2024-05-15 20:29:56.420503] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.958 [2024-05-15 20:29:56.420585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.958 [2024-05-15 20:29:56.420613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.958 [2024-05-15 20:29:56.420622] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.958 [2024-05-15 20:29:56.420629] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.958 [2024-05-15 20:29:56.420650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.958 qpair failed and we were unable to recover it. 00:38:03.958 [2024-05-15 20:29:56.430516] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.958 [2024-05-15 20:29:56.430617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.958 [2024-05-15 20:29:56.430645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.958 [2024-05-15 20:29:56.430654] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.958 [2024-05-15 20:29:56.430663] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.958 [2024-05-15 20:29:56.430683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.958 qpair failed and we were unable to recover it. 
00:38:03.958 [2024-05-15 20:29:56.440620] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.958 [2024-05-15 20:29:56.440716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.958 [2024-05-15 20:29:56.440745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.958 [2024-05-15 20:29:56.440754] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.959 [2024-05-15 20:29:56.440762] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.959 [2024-05-15 20:29:56.440783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.959 qpair failed and we were unable to recover it. 00:38:03.959 [2024-05-15 20:29:56.450583] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:03.959 [2024-05-15 20:29:56.450669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:03.959 [2024-05-15 20:29:56.450699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:03.959 [2024-05-15 20:29:56.450714] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:03.959 [2024-05-15 20:29:56.450721] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:03.959 [2024-05-15 20:29:56.450742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:03.959 qpair failed and we were unable to recover it. 00:38:04.222 [2024-05-15 20:29:56.460619] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.222 [2024-05-15 20:29:56.460735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.222 [2024-05-15 20:29:56.460764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.222 [2024-05-15 20:29:56.460773] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.222 [2024-05-15 20:29:56.460781] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.222 [2024-05-15 20:29:56.460802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.222 qpair failed and we were unable to recover it. 
00:38:04.222 [2024-05-15 20:29:56.470653] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.222 [2024-05-15 20:29:56.470789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.222 [2024-05-15 20:29:56.470818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.222 [2024-05-15 20:29:56.470827] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.222 [2024-05-15 20:29:56.470834] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.222 [2024-05-15 20:29:56.470855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.222 qpair failed and we were unable to recover it. 00:38:04.222 [2024-05-15 20:29:56.480670] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.222 [2024-05-15 20:29:56.480760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.222 [2024-05-15 20:29:56.480788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.222 [2024-05-15 20:29:56.480796] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.222 [2024-05-15 20:29:56.480804] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.222 [2024-05-15 20:29:56.480825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.222 qpair failed and we were unable to recover it. 00:38:04.222 [2024-05-15 20:29:56.490703] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.222 [2024-05-15 20:29:56.490791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.222 [2024-05-15 20:29:56.490818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.222 [2024-05-15 20:29:56.490828] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.222 [2024-05-15 20:29:56.490836] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.222 [2024-05-15 20:29:56.490856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.222 qpair failed and we were unable to recover it. 
00:38:04.222 [2024-05-15 20:29:56.500729] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.222 [2024-05-15 20:29:56.500822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.222 [2024-05-15 20:29:56.500851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.222 [2024-05-15 20:29:56.500859] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.222 [2024-05-15 20:29:56.500867] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.222 [2024-05-15 20:29:56.500889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.222 qpair failed and we were unable to recover it. 00:38:04.222 [2024-05-15 20:29:56.510767] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.222 [2024-05-15 20:29:56.510895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.222 [2024-05-15 20:29:56.510924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.222 [2024-05-15 20:29:56.510933] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.222 [2024-05-15 20:29:56.510940] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.222 [2024-05-15 20:29:56.510961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.222 qpair failed and we were unable to recover it. 00:38:04.222 [2024-05-15 20:29:56.520811] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.222 [2024-05-15 20:29:56.520902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.222 [2024-05-15 20:29:56.520930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.222 [2024-05-15 20:29:56.520939] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.222 [2024-05-15 20:29:56.520946] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.222 [2024-05-15 20:29:56.520967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.222 qpair failed and we were unable to recover it. 
00:38:04.222 [2024-05-15 20:29:56.530815] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.222 [2024-05-15 20:29:56.530928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.222 [2024-05-15 20:29:56.530969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.222 [2024-05-15 20:29:56.530979] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.222 [2024-05-15 20:29:56.530988] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.222 [2024-05-15 20:29:56.531012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.222 qpair failed and we were unable to recover it. 00:38:04.222 [2024-05-15 20:29:56.540863] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.222 [2024-05-15 20:29:56.540987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.222 [2024-05-15 20:29:56.541025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.222 [2024-05-15 20:29:56.541035] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.222 [2024-05-15 20:29:56.541042] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.222 [2024-05-15 20:29:56.541064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.222 qpair failed and we were unable to recover it. 00:38:04.222 [2024-05-15 20:29:56.550827] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.222 [2024-05-15 20:29:56.550957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.222 [2024-05-15 20:29:56.550988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.222 [2024-05-15 20:29:56.550997] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.222 [2024-05-15 20:29:56.551003] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.222 [2024-05-15 20:29:56.551025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.222 qpair failed and we were unable to recover it. 
00:38:04.223 [2024-05-15 20:29:56.560932] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.223 [2024-05-15 20:29:56.561033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.223 [2024-05-15 20:29:56.561062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.223 [2024-05-15 20:29:56.561071] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.223 [2024-05-15 20:29:56.561079] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.223 [2024-05-15 20:29:56.561100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.223 qpair failed and we were unable to recover it. 00:38:04.223 [2024-05-15 20:29:56.571019] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.223 [2024-05-15 20:29:56.571111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.223 [2024-05-15 20:29:56.571151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.223 [2024-05-15 20:29:56.571162] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.223 [2024-05-15 20:29:56.571170] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.223 [2024-05-15 20:29:56.571194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.223 qpair failed and we were unable to recover it. 00:38:04.223 [2024-05-15 20:29:56.580988] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.223 [2024-05-15 20:29:56.581110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.223 [2024-05-15 20:29:56.581141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.223 [2024-05-15 20:29:56.581150] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.223 [2024-05-15 20:29:56.581157] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.223 [2024-05-15 20:29:56.581179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.223 qpair failed and we were unable to recover it. 
00:38:04.223 [2024-05-15 20:29:56.591065] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.223 [2024-05-15 20:29:56.591175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.223 [2024-05-15 20:29:56.591203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.223 [2024-05-15 20:29:56.591212] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.223 [2024-05-15 20:29:56.591219] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.223 [2024-05-15 20:29:56.591241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.223 qpair failed and we were unable to recover it. 00:38:04.223 [2024-05-15 20:29:56.601017] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.223 [2024-05-15 20:29:56.601102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.223 [2024-05-15 20:29:56.601131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.223 [2024-05-15 20:29:56.601140] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.223 [2024-05-15 20:29:56.601147] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.223 [2024-05-15 20:29:56.601170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.223 qpair failed and we were unable to recover it. 00:38:04.223 [2024-05-15 20:29:56.611078] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.223 [2024-05-15 20:29:56.611210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.223 [2024-05-15 20:29:56.611242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.223 [2024-05-15 20:29:56.611251] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.223 [2024-05-15 20:29:56.611258] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.223 [2024-05-15 20:29:56.611280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.223 qpair failed and we were unable to recover it. 
00:38:04.223 [2024-05-15 20:29:56.621140] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.223 [2024-05-15 20:29:56.621309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.223 [2024-05-15 20:29:56.621345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.223 [2024-05-15 20:29:56.621355] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.223 [2024-05-15 20:29:56.621362] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.223 [2024-05-15 20:29:56.621382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.223 qpair failed and we were unable to recover it. 00:38:04.223 [2024-05-15 20:29:56.631131] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.223 [2024-05-15 20:29:56.631239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.223 [2024-05-15 20:29:56.631275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.223 [2024-05-15 20:29:56.631285] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.223 [2024-05-15 20:29:56.631292] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.223 [2024-05-15 20:29:56.631321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.223 qpair failed and we were unable to recover it. 00:38:04.223 [2024-05-15 20:29:56.641176] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.223 [2024-05-15 20:29:56.641265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.223 [2024-05-15 20:29:56.641292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.223 [2024-05-15 20:29:56.641302] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.223 [2024-05-15 20:29:56.641309] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.223 [2024-05-15 20:29:56.641340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.223 qpair failed and we were unable to recover it. 
00:38:04.223 [2024-05-15 20:29:56.651226] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.223 [2024-05-15 20:29:56.651322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.223 [2024-05-15 20:29:56.651351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.223 [2024-05-15 20:29:56.651361] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.223 [2024-05-15 20:29:56.651368] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.223 [2024-05-15 20:29:56.651389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.223 qpair failed and we were unable to recover it. 00:38:04.223 [2024-05-15 20:29:56.661241] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.223 [2024-05-15 20:29:56.661340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.223 [2024-05-15 20:29:56.661370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.223 [2024-05-15 20:29:56.661379] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.223 [2024-05-15 20:29:56.661386] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.223 [2024-05-15 20:29:56.661408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.223 qpair failed and we were unable to recover it. 00:38:04.223 [2024-05-15 20:29:56.671317] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.223 [2024-05-15 20:29:56.671434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.223 [2024-05-15 20:29:56.671463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.223 [2024-05-15 20:29:56.671472] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.223 [2024-05-15 20:29:56.671480] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.223 [2024-05-15 20:29:56.671507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.223 qpair failed and we were unable to recover it. 
00:38:04.223 [2024-05-15 20:29:56.681252] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.223 [2024-05-15 20:29:56.681343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.223 [2024-05-15 20:29:56.681373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.223 [2024-05-15 20:29:56.681381] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.223 [2024-05-15 20:29:56.681388] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.223 [2024-05-15 20:29:56.681409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.223 qpair failed and we were unable to recover it. 00:38:04.223 [2024-05-15 20:29:56.691321] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.223 [2024-05-15 20:29:56.691416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.223 [2024-05-15 20:29:56.691445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.223 [2024-05-15 20:29:56.691454] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.223 [2024-05-15 20:29:56.691461] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.223 [2024-05-15 20:29:56.691481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.224 qpair failed and we were unable to recover it. 00:38:04.224 [2024-05-15 20:29:56.701377] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.224 [2024-05-15 20:29:56.701466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.224 [2024-05-15 20:29:56.701494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.224 [2024-05-15 20:29:56.701502] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.224 [2024-05-15 20:29:56.701510] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.224 [2024-05-15 20:29:56.701531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.224 qpair failed and we were unable to recover it. 
00:38:04.224 [2024-05-15 20:29:56.711292] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.224 [2024-05-15 20:29:56.711395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.224 [2024-05-15 20:29:56.711424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.224 [2024-05-15 20:29:56.711433] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.224 [2024-05-15 20:29:56.711439] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.224 [2024-05-15 20:29:56.711459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.224 qpair failed and we were unable to recover it. 00:38:04.224 [2024-05-15 20:29:56.721442] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.224 [2024-05-15 20:29:56.721533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.224 [2024-05-15 20:29:56.721573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.224 [2024-05-15 20:29:56.721584] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.224 [2024-05-15 20:29:56.721591] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.224 [2024-05-15 20:29:56.721612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.224 qpair failed and we were unable to recover it. 00:38:04.486 [2024-05-15 20:29:56.731472] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.486 [2024-05-15 20:29:56.731563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.486 [2024-05-15 20:29:56.731594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.486 [2024-05-15 20:29:56.731603] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.486 [2024-05-15 20:29:56.731609] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.486 [2024-05-15 20:29:56.731630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.486 qpair failed and we were unable to recover it. 
00:38:04.486 [2024-05-15 20:29:56.741429] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.486 [2024-05-15 20:29:56.741526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.486 [2024-05-15 20:29:56.741554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.486 [2024-05-15 20:29:56.741564] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.486 [2024-05-15 20:29:56.741571] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.486 [2024-05-15 20:29:56.741592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.486 qpair failed and we were unable to recover it. 00:38:04.486 [2024-05-15 20:29:56.751434] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.486 [2024-05-15 20:29:56.751523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.486 [2024-05-15 20:29:56.751551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.486 [2024-05-15 20:29:56.751561] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.486 [2024-05-15 20:29:56.751569] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.486 [2024-05-15 20:29:56.751589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.486 qpair failed and we were unable to recover it. 00:38:04.486 [2024-05-15 20:29:56.761560] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.486 [2024-05-15 20:29:56.761650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.486 [2024-05-15 20:29:56.761679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.486 [2024-05-15 20:29:56.761687] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.486 [2024-05-15 20:29:56.761695] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.486 [2024-05-15 20:29:56.761722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.486 qpair failed and we were unable to recover it. 
00:38:04.486 [2024-05-15 20:29:56.771590] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.486 [2024-05-15 20:29:56.771678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.486 [2024-05-15 20:29:56.771706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.486 [2024-05-15 20:29:56.771715] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.486 [2024-05-15 20:29:56.771722] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.486 [2024-05-15 20:29:56.771742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.486 qpair failed and we were unable to recover it. 00:38:04.486 [2024-05-15 20:29:56.781612] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.486 [2024-05-15 20:29:56.781696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.486 [2024-05-15 20:29:56.781723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.486 [2024-05-15 20:29:56.781733] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.487 [2024-05-15 20:29:56.781740] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.487 [2024-05-15 20:29:56.781760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.487 qpair failed and we were unable to recover it. 00:38:04.487 [2024-05-15 20:29:56.791692] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.487 [2024-05-15 20:29:56.791790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.487 [2024-05-15 20:29:56.791819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.487 [2024-05-15 20:29:56.791828] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.487 [2024-05-15 20:29:56.791835] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.487 [2024-05-15 20:29:56.791856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.487 qpair failed and we were unable to recover it. 
00:38:04.487 [2024-05-15 20:29:56.801675] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.487 [2024-05-15 20:29:56.801767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.487 [2024-05-15 20:29:56.801795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.487 [2024-05-15 20:29:56.801804] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.487 [2024-05-15 20:29:56.801811] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.487 [2024-05-15 20:29:56.801831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.487 qpair failed and we were unable to recover it. 00:38:04.487 [2024-05-15 20:29:56.811698] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.487 [2024-05-15 20:29:56.811789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.487 [2024-05-15 20:29:56.811823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.487 [2024-05-15 20:29:56.811832] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.487 [2024-05-15 20:29:56.811839] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.487 [2024-05-15 20:29:56.811860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.487 qpair failed and we were unable to recover it. 00:38:04.487 [2024-05-15 20:29:56.821760] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.487 [2024-05-15 20:29:56.821869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.487 [2024-05-15 20:29:56.821897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.487 [2024-05-15 20:29:56.821905] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.487 [2024-05-15 20:29:56.821913] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.487 [2024-05-15 20:29:56.821932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.487 qpair failed and we were unable to recover it. 
00:38:04.487 [2024-05-15 20:29:56.831803] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.487 [2024-05-15 20:29:56.831894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.487 [2024-05-15 20:29:56.831924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.487 [2024-05-15 20:29:56.831932] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.487 [2024-05-15 20:29:56.831941] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.487 [2024-05-15 20:29:56.831963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.487 qpair failed and we were unable to recover it. 00:38:04.487 [2024-05-15 20:29:56.841800] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.487 [2024-05-15 20:29:56.841887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.487 [2024-05-15 20:29:56.841927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.487 [2024-05-15 20:29:56.841938] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.487 [2024-05-15 20:29:56.841946] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.487 [2024-05-15 20:29:56.841970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.487 qpair failed and we were unable to recover it. 00:38:04.487 [2024-05-15 20:29:56.851732] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.487 [2024-05-15 20:29:56.851826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.487 [2024-05-15 20:29:56.851857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.487 [2024-05-15 20:29:56.851867] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.487 [2024-05-15 20:29:56.851881] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.487 [2024-05-15 20:29:56.851904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.487 qpair failed and we were unable to recover it. 
00:38:04.487 [2024-05-15 20:29:56.861917] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.487 [2024-05-15 20:29:56.862005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.487 [2024-05-15 20:29:56.862034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.487 [2024-05-15 20:29:56.862044] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.487 [2024-05-15 20:29:56.862051] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.487 [2024-05-15 20:29:56.862072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.487 qpair failed and we were unable to recover it. 00:38:04.487 [2024-05-15 20:29:56.871904] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.487 [2024-05-15 20:29:56.872011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.487 [2024-05-15 20:29:56.872051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.487 [2024-05-15 20:29:56.872062] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.487 [2024-05-15 20:29:56.872072] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.487 [2024-05-15 20:29:56.872096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.487 qpair failed and we were unable to recover it. 00:38:04.487 [2024-05-15 20:29:56.881911] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.487 [2024-05-15 20:29:56.881999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.487 [2024-05-15 20:29:56.882039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.487 [2024-05-15 20:29:56.882050] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.487 [2024-05-15 20:29:56.882058] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.487 [2024-05-15 20:29:56.882082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.487 qpair failed and we were unable to recover it. 
00:38:04.487 [2024-05-15 20:29:56.891953] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.487 [2024-05-15 20:29:56.892037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.487 [2024-05-15 20:29:56.892068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.487 [2024-05-15 20:29:56.892078] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.487 [2024-05-15 20:29:56.892086] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.487 [2024-05-15 20:29:56.892108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.487 qpair failed and we were unable to recover it. 00:38:04.487 [2024-05-15 20:29:56.902017] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.487 [2024-05-15 20:29:56.902157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.487 [2024-05-15 20:29:56.902193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.487 [2024-05-15 20:29:56.902203] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.487 [2024-05-15 20:29:56.902209] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.487 [2024-05-15 20:29:56.902232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.487 qpair failed and we were unable to recover it. 00:38:04.487 [2024-05-15 20:29:56.911949] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.487 [2024-05-15 20:29:56.912053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.487 [2024-05-15 20:29:56.912082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.487 [2024-05-15 20:29:56.912093] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.487 [2024-05-15 20:29:56.912101] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.487 [2024-05-15 20:29:56.912122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.487 qpair failed and we were unable to recover it. 
00:38:04.487 [2024-05-15 20:29:56.922041] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.487 [2024-05-15 20:29:56.922124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.487 [2024-05-15 20:29:56.922153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.487 [2024-05-15 20:29:56.922162] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.488 [2024-05-15 20:29:56.922170] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.488 [2024-05-15 20:29:56.922191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.488 qpair failed and we were unable to recover it. 00:38:04.488 [2024-05-15 20:29:56.932069] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.488 [2024-05-15 20:29:56.932153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.488 [2024-05-15 20:29:56.932181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.488 [2024-05-15 20:29:56.932189] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.488 [2024-05-15 20:29:56.932196] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.488 [2024-05-15 20:29:56.932217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.488 qpair failed and we were unable to recover it. 00:38:04.488 [2024-05-15 20:29:56.942142] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.488 [2024-05-15 20:29:56.942225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.488 [2024-05-15 20:29:56.942257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.488 [2024-05-15 20:29:56.942266] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.488 [2024-05-15 20:29:56.942281] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.488 [2024-05-15 20:29:56.942303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.488 qpair failed and we were unable to recover it. 
00:38:04.488 [2024-05-15 20:29:56.952151] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.488 [2024-05-15 20:29:56.952247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.488 [2024-05-15 20:29:56.952276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.488 [2024-05-15 20:29:56.952286] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.488 [2024-05-15 20:29:56.952293] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.488 [2024-05-15 20:29:56.952321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.488 qpair failed and we were unable to recover it. 00:38:04.488 [2024-05-15 20:29:56.962204] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.488 [2024-05-15 20:29:56.962292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.488 [2024-05-15 20:29:56.962325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.488 [2024-05-15 20:29:56.962335] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.488 [2024-05-15 20:29:56.962342] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.488 [2024-05-15 20:29:56.962362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.488 qpair failed and we were unable to recover it. 00:38:04.488 [2024-05-15 20:29:56.972189] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.488 [2024-05-15 20:29:56.972272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.488 [2024-05-15 20:29:56.972301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.488 [2024-05-15 20:29:56.972310] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.488 [2024-05-15 20:29:56.972323] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.488 [2024-05-15 20:29:56.972344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.488 qpair failed and we were unable to recover it. 
00:38:04.488 [2024-05-15 20:29:56.982246] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.488 [2024-05-15 20:29:56.982338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.488 [2024-05-15 20:29:56.982367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.488 [2024-05-15 20:29:56.982377] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.488 [2024-05-15 20:29:56.982384] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.488 [2024-05-15 20:29:56.982405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.488 qpair failed and we were unable to recover it. 00:38:04.750 [2024-05-15 20:29:56.992288] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.750 [2024-05-15 20:29:56.992399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.750 [2024-05-15 20:29:56.992428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.750 [2024-05-15 20:29:56.992437] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.750 [2024-05-15 20:29:56.992444] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.750 [2024-05-15 20:29:56.992465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.750 qpair failed and we were unable to recover it. 00:38:04.750 [2024-05-15 20:29:57.002318] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.750 [2024-05-15 20:29:57.002412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.750 [2024-05-15 20:29:57.002440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.750 [2024-05-15 20:29:57.002449] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.750 [2024-05-15 20:29:57.002456] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.750 [2024-05-15 20:29:57.002479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.750 qpair failed and we were unable to recover it. 
00:38:04.750 [2024-05-15 20:29:57.012379] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.750 [2024-05-15 20:29:57.012471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.750 [2024-05-15 20:29:57.012499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.750 [2024-05-15 20:29:57.012509] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.750 [2024-05-15 20:29:57.012516] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.750 [2024-05-15 20:29:57.012536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.750 qpair failed and we were unable to recover it. 00:38:04.750 [2024-05-15 20:29:57.022594] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.750 [2024-05-15 20:29:57.022752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.750 [2024-05-15 20:29:57.022780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.750 [2024-05-15 20:29:57.022788] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.750 [2024-05-15 20:29:57.022795] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.751 [2024-05-15 20:29:57.022816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.751 qpair failed and we were unable to recover it. 00:38:04.751 [2024-05-15 20:29:57.032440] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.751 [2024-05-15 20:29:57.032535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.751 [2024-05-15 20:29:57.032564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.751 [2024-05-15 20:29:57.032573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.751 [2024-05-15 20:29:57.032587] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.751 [2024-05-15 20:29:57.032608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.751 qpair failed and we were unable to recover it. 
00:38:04.751 [2024-05-15 20:29:57.042413] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.751 [2024-05-15 20:29:57.042503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.751 [2024-05-15 20:29:57.042531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.751 [2024-05-15 20:29:57.042539] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.751 [2024-05-15 20:29:57.042546] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.751 [2024-05-15 20:29:57.042566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.751 qpair failed and we were unable to recover it. 00:38:04.751 [2024-05-15 20:29:57.052490] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.751 [2024-05-15 20:29:57.052581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.751 [2024-05-15 20:29:57.052608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.751 [2024-05-15 20:29:57.052617] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.751 [2024-05-15 20:29:57.052625] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.751 [2024-05-15 20:29:57.052645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.751 qpair failed and we were unable to recover it. 00:38:04.751 [2024-05-15 20:29:57.062514] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.751 [2024-05-15 20:29:57.062599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.751 [2024-05-15 20:29:57.062627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.751 [2024-05-15 20:29:57.062636] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.751 [2024-05-15 20:29:57.062643] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.751 [2024-05-15 20:29:57.062663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.751 qpair failed and we were unable to recover it. 
00:38:04.751 [2024-05-15 20:29:57.072423] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.751 [2024-05-15 20:29:57.072512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.751 [2024-05-15 20:29:57.072541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.751 [2024-05-15 20:29:57.072550] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.751 [2024-05-15 20:29:57.072557] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.751 [2024-05-15 20:29:57.072577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.751 qpair failed and we were unable to recover it. 00:38:04.751 [2024-05-15 20:29:57.082564] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.751 [2024-05-15 20:29:57.082659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.751 [2024-05-15 20:29:57.082686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.751 [2024-05-15 20:29:57.082695] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.751 [2024-05-15 20:29:57.082702] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.751 [2024-05-15 20:29:57.082724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.751 qpair failed and we were unable to recover it. 00:38:04.751 [2024-05-15 20:29:57.092586] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.751 [2024-05-15 20:29:57.092704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.751 [2024-05-15 20:29:57.092732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.751 [2024-05-15 20:29:57.092742] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.751 [2024-05-15 20:29:57.092749] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.751 [2024-05-15 20:29:57.092769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.751 qpair failed and we were unable to recover it. 
00:38:04.751 [2024-05-15 20:29:57.102636] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.751 [2024-05-15 20:29:57.102769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.751 [2024-05-15 20:29:57.102797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.751 [2024-05-15 20:29:57.102806] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.751 [2024-05-15 20:29:57.102813] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.751 [2024-05-15 20:29:57.102833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.751 qpair failed and we were unable to recover it. 00:38:04.751 [2024-05-15 20:29:57.112668] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.751 [2024-05-15 20:29:57.112771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.751 [2024-05-15 20:29:57.112802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.751 [2024-05-15 20:29:57.112810] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.751 [2024-05-15 20:29:57.112817] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.751 [2024-05-15 20:29:57.112838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.751 qpair failed and we were unable to recover it. 00:38:04.751 [2024-05-15 20:29:57.122680] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.751 [2024-05-15 20:29:57.122773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.751 [2024-05-15 20:29:57.122802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.751 [2024-05-15 20:29:57.122811] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.751 [2024-05-15 20:29:57.122824] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.751 [2024-05-15 20:29:57.122846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.751 qpair failed and we were unable to recover it. 
00:38:04.751 [2024-05-15 20:29:57.132684] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.751 [2024-05-15 20:29:57.132772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.751 [2024-05-15 20:29:57.132800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.751 [2024-05-15 20:29:57.132810] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.751 [2024-05-15 20:29:57.132817] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.751 [2024-05-15 20:29:57.132838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.751 qpair failed and we were unable to recover it. 00:38:04.751 [2024-05-15 20:29:57.142749] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.751 [2024-05-15 20:29:57.142834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.751 [2024-05-15 20:29:57.142864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.751 [2024-05-15 20:29:57.142872] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.751 [2024-05-15 20:29:57.142880] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.751 [2024-05-15 20:29:57.142902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.751 qpair failed and we were unable to recover it. 00:38:04.751 [2024-05-15 20:29:57.152784] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.751 [2024-05-15 20:29:57.152887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.751 [2024-05-15 20:29:57.152915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.751 [2024-05-15 20:29:57.152924] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.751 [2024-05-15 20:29:57.152932] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.751 [2024-05-15 20:29:57.152953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.751 qpair failed and we were unable to recover it. 
00:38:04.751 [2024-05-15 20:29:57.162786] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.751 [2024-05-15 20:29:57.162877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.752 [2024-05-15 20:29:57.162906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.752 [2024-05-15 20:29:57.162915] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.752 [2024-05-15 20:29:57.162923] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.752 [2024-05-15 20:29:57.162944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.752 qpair failed and we were unable to recover it. 00:38:04.752 [2024-05-15 20:29:57.172836] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.752 [2024-05-15 20:29:57.172930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.752 [2024-05-15 20:29:57.172960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.752 [2024-05-15 20:29:57.172969] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.752 [2024-05-15 20:29:57.172977] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.752 [2024-05-15 20:29:57.172999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.752 qpair failed and we were unable to recover it. 00:38:04.752 [2024-05-15 20:29:57.182926] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.752 [2024-05-15 20:29:57.183070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.752 [2024-05-15 20:29:57.183110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.752 [2024-05-15 20:29:57.183121] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.752 [2024-05-15 20:29:57.183128] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.752 [2024-05-15 20:29:57.183153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.752 qpair failed and we were unable to recover it. 
00:38:04.752 [2024-05-15 20:29:57.192907] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.752 [2024-05-15 20:29:57.193009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.752 [2024-05-15 20:29:57.193049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.752 [2024-05-15 20:29:57.193059] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.752 [2024-05-15 20:29:57.193067] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.752 [2024-05-15 20:29:57.193092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.752 qpair failed and we were unable to recover it. 00:38:04.752 [2024-05-15 20:29:57.202970] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.752 [2024-05-15 20:29:57.203084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.752 [2024-05-15 20:29:57.203124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.752 [2024-05-15 20:29:57.203134] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.752 [2024-05-15 20:29:57.203141] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.752 [2024-05-15 20:29:57.203166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.752 qpair failed and we were unable to recover it. 00:38:04.752 [2024-05-15 20:29:57.212933] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.752 [2024-05-15 20:29:57.213025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.752 [2024-05-15 20:29:57.213055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.752 [2024-05-15 20:29:57.213073] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.752 [2024-05-15 20:29:57.213080] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.752 [2024-05-15 20:29:57.213101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.752 qpair failed and we were unable to recover it. 
00:38:04.752 [2024-05-15 20:29:57.223000] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.752 [2024-05-15 20:29:57.223082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.752 [2024-05-15 20:29:57.223110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.752 [2024-05-15 20:29:57.223119] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.752 [2024-05-15 20:29:57.223126] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.752 [2024-05-15 20:29:57.223147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.752 qpair failed and we were unable to recover it. 00:38:04.752 [2024-05-15 20:29:57.233043] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.752 [2024-05-15 20:29:57.233157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.752 [2024-05-15 20:29:57.233186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.752 [2024-05-15 20:29:57.233194] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.752 [2024-05-15 20:29:57.233201] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.752 [2024-05-15 20:29:57.233222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.752 qpair failed and we were unable to recover it. 00:38:04.752 [2024-05-15 20:29:57.243029] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:04.752 [2024-05-15 20:29:57.243119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:04.752 [2024-05-15 20:29:57.243147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:04.752 [2024-05-15 20:29:57.243156] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:04.752 [2024-05-15 20:29:57.243164] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:04.752 [2024-05-15 20:29:57.243185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:04.752 qpair failed and we were unable to recover it. 
00:38:05.014 [2024-05-15 20:29:57.253191] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.014 [2024-05-15 20:29:57.253280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.014 [2024-05-15 20:29:57.253307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.014 [2024-05-15 20:29:57.253322] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.014 [2024-05-15 20:29:57.253330] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.014 [2024-05-15 20:29:57.253351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.014 qpair failed and we were unable to recover it. 00:38:05.014 [2024-05-15 20:29:57.263144] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.014 [2024-05-15 20:29:57.263225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.014 [2024-05-15 20:29:57.263250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.014 [2024-05-15 20:29:57.263259] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.014 [2024-05-15 20:29:57.263265] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.014 [2024-05-15 20:29:57.263284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.014 qpair failed and we were unable to recover it. 00:38:05.014 [2024-05-15 20:29:57.273145] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.014 [2024-05-15 20:29:57.273244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.014 [2024-05-15 20:29:57.273268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.014 [2024-05-15 20:29:57.273278] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.014 [2024-05-15 20:29:57.273284] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.015 [2024-05-15 20:29:57.273304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.015 qpair failed and we were unable to recover it. 
00:38:05.015 [2024-05-15 20:29:57.283205] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.015 [2024-05-15 20:29:57.283283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.015 [2024-05-15 20:29:57.283305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.015 [2024-05-15 20:29:57.283318] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.015 [2024-05-15 20:29:57.283325] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.015 [2024-05-15 20:29:57.283344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.015 qpair failed and we were unable to recover it. 00:38:05.015 [2024-05-15 20:29:57.293208] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.015 [2024-05-15 20:29:57.293295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.015 [2024-05-15 20:29:57.293324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.015 [2024-05-15 20:29:57.293332] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.015 [2024-05-15 20:29:57.293339] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.015 [2024-05-15 20:29:57.293356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.015 qpair failed and we were unable to recover it. 00:38:05.015 [2024-05-15 20:29:57.303234] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.015 [2024-05-15 20:29:57.303345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.015 [2024-05-15 20:29:57.303367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.015 [2024-05-15 20:29:57.303380] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.015 [2024-05-15 20:29:57.303387] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.015 [2024-05-15 20:29:57.303406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.015 qpair failed and we were unable to recover it. 
00:38:05.015 [2024-05-15 20:29:57.313262] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.015 [2024-05-15 20:29:57.313353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.015 [2024-05-15 20:29:57.313375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.015 [2024-05-15 20:29:57.313383] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.015 [2024-05-15 20:29:57.313390] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.015 [2024-05-15 20:29:57.313407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.015 qpair failed and we were unable to recover it. 00:38:05.015 [2024-05-15 20:29:57.323259] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.015 [2024-05-15 20:29:57.323349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.015 [2024-05-15 20:29:57.323370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.015 [2024-05-15 20:29:57.323379] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.015 [2024-05-15 20:29:57.323386] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.015 [2024-05-15 20:29:57.323403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.015 qpair failed and we were unable to recover it. 00:38:05.015 [2024-05-15 20:29:57.333342] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.015 [2024-05-15 20:29:57.333449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.015 [2024-05-15 20:29:57.333470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.015 [2024-05-15 20:29:57.333479] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.015 [2024-05-15 20:29:57.333486] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.015 [2024-05-15 20:29:57.333503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.015 qpair failed and we were unable to recover it. 
00:38:05.015 [2024-05-15 20:29:57.343338] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.015 [2024-05-15 20:29:57.343419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.015 [2024-05-15 20:29:57.343440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.015 [2024-05-15 20:29:57.343448] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.015 [2024-05-15 20:29:57.343455] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.015 [2024-05-15 20:29:57.343472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.015 qpair failed and we were unable to recover it. 00:38:05.015 [2024-05-15 20:29:57.353340] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.015 [2024-05-15 20:29:57.353501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.015 [2024-05-15 20:29:57.353521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.015 [2024-05-15 20:29:57.353529] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.015 [2024-05-15 20:29:57.353535] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.015 [2024-05-15 20:29:57.353552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.015 qpair failed and we were unable to recover it. 00:38:05.015 [2024-05-15 20:29:57.363270] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.015 [2024-05-15 20:29:57.363357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.015 [2024-05-15 20:29:57.363377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.015 [2024-05-15 20:29:57.363385] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.015 [2024-05-15 20:29:57.363392] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.015 [2024-05-15 20:29:57.363409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.015 qpair failed and we were unable to recover it. 
00:38:05.015 [2024-05-15 20:29:57.373416] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.015 [2024-05-15 20:29:57.373491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.015 [2024-05-15 20:29:57.373511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.015 [2024-05-15 20:29:57.373518] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.015 [2024-05-15 20:29:57.373524] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.015 [2024-05-15 20:29:57.373540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.015 qpair failed and we were unable to recover it. 00:38:05.015 [2024-05-15 20:29:57.383431] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.015 [2024-05-15 20:29:57.383534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.015 [2024-05-15 20:29:57.383552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.015 [2024-05-15 20:29:57.383561] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.015 [2024-05-15 20:29:57.383567] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.015 [2024-05-15 20:29:57.383583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.015 qpair failed and we were unable to recover it. 00:38:05.015 [2024-05-15 20:29:57.393531] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.015 [2024-05-15 20:29:57.393617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.015 [2024-05-15 20:29:57.393636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.015 [2024-05-15 20:29:57.393648] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.015 [2024-05-15 20:29:57.393654] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.015 [2024-05-15 20:29:57.393670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.015 qpair failed and we were unable to recover it. 
00:38:05.015 [2024-05-15 20:29:57.403453] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.015 [2024-05-15 20:29:57.403528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.015 [2024-05-15 20:29:57.403547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.015 [2024-05-15 20:29:57.403555] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.015 [2024-05-15 20:29:57.403561] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.016 [2024-05-15 20:29:57.403577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.016 qpair failed and we were unable to recover it. 00:38:05.016 [2024-05-15 20:29:57.413393] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.016 [2024-05-15 20:29:57.413474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.016 [2024-05-15 20:29:57.413493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.016 [2024-05-15 20:29:57.413501] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.016 [2024-05-15 20:29:57.413507] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.016 [2024-05-15 20:29:57.413522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.016 qpair failed and we were unable to recover it. 00:38:05.016 [2024-05-15 20:29:57.423577] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.016 [2024-05-15 20:29:57.423655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.016 [2024-05-15 20:29:57.423673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.016 [2024-05-15 20:29:57.423681] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.016 [2024-05-15 20:29:57.423687] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.016 [2024-05-15 20:29:57.423702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.016 qpair failed and we were unable to recover it. 
00:38:05.016 [2024-05-15 20:29:57.433557] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.016 [2024-05-15 20:29:57.433640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.016 [2024-05-15 20:29:57.433658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.016 [2024-05-15 20:29:57.433666] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.016 [2024-05-15 20:29:57.433672] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.016 [2024-05-15 20:29:57.433686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.016 qpair failed and we were unable to recover it. 00:38:05.016 [2024-05-15 20:29:57.443602] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.016 [2024-05-15 20:29:57.443678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.016 [2024-05-15 20:29:57.443696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.016 [2024-05-15 20:29:57.443703] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.016 [2024-05-15 20:29:57.443710] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.016 [2024-05-15 20:29:57.443724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.016 qpair failed and we were unable to recover it. 00:38:05.016 [2024-05-15 20:29:57.453608] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.016 [2024-05-15 20:29:57.453680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.016 [2024-05-15 20:29:57.453698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.016 [2024-05-15 20:29:57.453705] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.016 [2024-05-15 20:29:57.453711] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.016 [2024-05-15 20:29:57.453726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.016 qpair failed and we were unable to recover it. 
00:38:05.016 [2024-05-15 20:29:57.463643] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.016 [2024-05-15 20:29:57.463718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.016 [2024-05-15 20:29:57.463736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.016 [2024-05-15 20:29:57.463743] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.016 [2024-05-15 20:29:57.463749] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.016 [2024-05-15 20:29:57.463764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.016 qpair failed and we were unable to recover it. 00:38:05.016 [2024-05-15 20:29:57.473684] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.016 [2024-05-15 20:29:57.473765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.016 [2024-05-15 20:29:57.473783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.016 [2024-05-15 20:29:57.473791] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.016 [2024-05-15 20:29:57.473798] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.016 [2024-05-15 20:29:57.473812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.016 qpair failed and we were unable to recover it. 00:38:05.016 [2024-05-15 20:29:57.483687] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.016 [2024-05-15 20:29:57.483758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.016 [2024-05-15 20:29:57.483782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.016 [2024-05-15 20:29:57.483790] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.016 [2024-05-15 20:29:57.483796] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.016 [2024-05-15 20:29:57.483811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.016 qpair failed and we were unable to recover it. 
00:38:05.016 [2024-05-15 20:29:57.493754] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.016 [2024-05-15 20:29:57.493827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.016 [2024-05-15 20:29:57.493844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.016 [2024-05-15 20:29:57.493852] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.016 [2024-05-15 20:29:57.493859] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.016 [2024-05-15 20:29:57.493873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.016 qpair failed and we were unable to recover it. 00:38:05.016 [2024-05-15 20:29:57.503757] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.016 [2024-05-15 20:29:57.503833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.016 [2024-05-15 20:29:57.503850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.016 [2024-05-15 20:29:57.503857] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.016 [2024-05-15 20:29:57.503864] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.016 [2024-05-15 20:29:57.503878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.016 qpair failed and we were unable to recover it. 00:38:05.016 [2024-05-15 20:29:57.513783] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.016 [2024-05-15 20:29:57.513875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.016 [2024-05-15 20:29:57.513891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.016 [2024-05-15 20:29:57.513899] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.016 [2024-05-15 20:29:57.513905] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.016 [2024-05-15 20:29:57.513919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.016 qpair failed and we were unable to recover it. 
00:38:05.278 [2024-05-15 20:29:57.523814] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.278 [2024-05-15 20:29:57.523927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.278 [2024-05-15 20:29:57.523952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.278 [2024-05-15 20:29:57.523962] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.278 [2024-05-15 20:29:57.523969] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.278 [2024-05-15 20:29:57.523987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.278 qpair failed and we were unable to recover it. 00:38:05.278 [2024-05-15 20:29:57.533867] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.278 [2024-05-15 20:29:57.533946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.278 [2024-05-15 20:29:57.533971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.278 [2024-05-15 20:29:57.533980] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.278 [2024-05-15 20:29:57.533986] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.278 [2024-05-15 20:29:57.534004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.278 qpair failed and we were unable to recover it. 00:38:05.278 [2024-05-15 20:29:57.543773] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.278 [2024-05-15 20:29:57.543849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.278 [2024-05-15 20:29:57.543868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.278 [2024-05-15 20:29:57.543875] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.278 [2024-05-15 20:29:57.543881] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.278 [2024-05-15 20:29:57.543897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.278 qpair failed and we were unable to recover it. 
00:38:05.278 [2024-05-15 20:29:57.553797] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.278 [2024-05-15 20:29:57.553874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.278 [2024-05-15 20:29:57.553891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.278 [2024-05-15 20:29:57.553899] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.278 [2024-05-15 20:29:57.553905] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.278 [2024-05-15 20:29:57.553920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.278 qpair failed and we were unable to recover it. 00:38:05.278 [2024-05-15 20:29:57.563909] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.278 [2024-05-15 20:29:57.563980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.278 [2024-05-15 20:29:57.563996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.278 [2024-05-15 20:29:57.564003] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.278 [2024-05-15 20:29:57.564009] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.278 [2024-05-15 20:29:57.564024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.278 qpair failed and we were unable to recover it. 00:38:05.278 [2024-05-15 20:29:57.573942] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.278 [2024-05-15 20:29:57.574012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.278 [2024-05-15 20:29:57.574034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.278 [2024-05-15 20:29:57.574042] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.278 [2024-05-15 20:29:57.574048] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.278 [2024-05-15 20:29:57.574063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.278 qpair failed and we were unable to recover it. 
00:38:05.278 [2024-05-15 20:29:57.583993] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.278 [2024-05-15 20:29:57.584072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.278 [2024-05-15 20:29:57.584088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.278 [2024-05-15 20:29:57.584096] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.278 [2024-05-15 20:29:57.584102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.278 [2024-05-15 20:29:57.584117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.278 qpair failed and we were unable to recover it. 00:38:05.278 [2024-05-15 20:29:57.594043] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.278 [2024-05-15 20:29:57.594170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.278 [2024-05-15 20:29:57.594195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.278 [2024-05-15 20:29:57.594205] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.279 [2024-05-15 20:29:57.594212] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.279 [2024-05-15 20:29:57.594230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.279 qpair failed and we were unable to recover it. 00:38:05.279 [2024-05-15 20:29:57.604033] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.279 [2024-05-15 20:29:57.604104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.279 [2024-05-15 20:29:57.604123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.279 [2024-05-15 20:29:57.604130] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.279 [2024-05-15 20:29:57.604137] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.279 [2024-05-15 20:29:57.604152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.279 qpair failed and we were unable to recover it. 
00:38:05.279 [2024-05-15 20:29:57.614058] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.279 [2024-05-15 20:29:57.614135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.279 [2024-05-15 20:29:57.614153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.279 [2024-05-15 20:29:57.614161] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.279 [2024-05-15 20:29:57.614168] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.279 [2024-05-15 20:29:57.614188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.279 qpair failed and we were unable to recover it. 00:38:05.279 [2024-05-15 20:29:57.624079] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.279 [2024-05-15 20:29:57.624157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.279 [2024-05-15 20:29:57.624174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.279 [2024-05-15 20:29:57.624181] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.279 [2024-05-15 20:29:57.624188] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.279 [2024-05-15 20:29:57.624203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.279 qpair failed and we were unable to recover it. 00:38:05.279 [2024-05-15 20:29:57.634114] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.279 [2024-05-15 20:29:57.634194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.279 [2024-05-15 20:29:57.634211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.279 [2024-05-15 20:29:57.634218] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.279 [2024-05-15 20:29:57.634224] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.279 [2024-05-15 20:29:57.634239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.279 qpair failed and we were unable to recover it. 
00:38:05.279 [2024-05-15 20:29:57.644148] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.279 [2024-05-15 20:29:57.644221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.279 [2024-05-15 20:29:57.644238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.279 [2024-05-15 20:29:57.644246] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.279 [2024-05-15 20:29:57.644252] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.279 [2024-05-15 20:29:57.644266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.279 qpair failed and we were unable to recover it. 00:38:05.279 [2024-05-15 20:29:57.654162] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.279 [2024-05-15 20:29:57.654233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.279 [2024-05-15 20:29:57.654250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.279 [2024-05-15 20:29:57.654257] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.279 [2024-05-15 20:29:57.654264] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.279 [2024-05-15 20:29:57.654278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.279 qpair failed and we were unable to recover it. 00:38:05.279 [2024-05-15 20:29:57.664197] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.279 [2024-05-15 20:29:57.664272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.279 [2024-05-15 20:29:57.664292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.279 [2024-05-15 20:29:57.664299] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.279 [2024-05-15 20:29:57.664305] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.279 [2024-05-15 20:29:57.664324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.279 qpair failed and we were unable to recover it. 
00:38:05.279 [2024-05-15 20:29:57.674222] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.279 [2024-05-15 20:29:57.674296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.279 [2024-05-15 20:29:57.674317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.279 [2024-05-15 20:29:57.674325] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.279 [2024-05-15 20:29:57.674331] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.279 [2024-05-15 20:29:57.674345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.279 qpair failed and we were unable to recover it. 00:38:05.279 [2024-05-15 20:29:57.684249] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.279 [2024-05-15 20:29:57.684356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.279 [2024-05-15 20:29:57.684373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.279 [2024-05-15 20:29:57.684381] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.279 [2024-05-15 20:29:57.684387] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.279 [2024-05-15 20:29:57.684403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.279 qpair failed and we were unable to recover it. 00:38:05.279 [2024-05-15 20:29:57.694267] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.279 [2024-05-15 20:29:57.694382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.279 [2024-05-15 20:29:57.694400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.279 [2024-05-15 20:29:57.694407] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.279 [2024-05-15 20:29:57.694414] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.279 [2024-05-15 20:29:57.694428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.279 qpair failed and we were unable to recover it. 
00:38:05.279 [2024-05-15 20:29:57.704304] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.279 [2024-05-15 20:29:57.704380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.279 [2024-05-15 20:29:57.704398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.279 [2024-05-15 20:29:57.704405] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.279 [2024-05-15 20:29:57.704412] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.279 [2024-05-15 20:29:57.704430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.279 qpair failed and we were unable to recover it. 00:38:05.279 [2024-05-15 20:29:57.714346] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.279 [2024-05-15 20:29:57.714425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.279 [2024-05-15 20:29:57.714441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.279 [2024-05-15 20:29:57.714449] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.279 [2024-05-15 20:29:57.714455] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.279 [2024-05-15 20:29:57.714469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.279 qpair failed and we were unable to recover it. 00:38:05.279 [2024-05-15 20:29:57.724352] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.279 [2024-05-15 20:29:57.724425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.279 [2024-05-15 20:29:57.724441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.279 [2024-05-15 20:29:57.724449] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.279 [2024-05-15 20:29:57.724455] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.279 [2024-05-15 20:29:57.724470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.279 qpair failed and we were unable to recover it. 
00:38:05.279 [2024-05-15 20:29:57.734395] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.279 [2024-05-15 20:29:57.734464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.279 [2024-05-15 20:29:57.734481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.279 [2024-05-15 20:29:57.734488] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.279 [2024-05-15 20:29:57.734494] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.279 [2024-05-15 20:29:57.734509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.279 qpair failed and we were unable to recover it. 00:38:05.279 [2024-05-15 20:29:57.744524] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.279 [2024-05-15 20:29:57.744605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.279 [2024-05-15 20:29:57.744622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.279 [2024-05-15 20:29:57.744629] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.279 [2024-05-15 20:29:57.744637] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.279 [2024-05-15 20:29:57.744651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.279 qpair failed and we were unable to recover it. 00:38:05.279 [2024-05-15 20:29:57.754499] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.279 [2024-05-15 20:29:57.754581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.279 [2024-05-15 20:29:57.754602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.279 [2024-05-15 20:29:57.754609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.279 [2024-05-15 20:29:57.754615] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.279 [2024-05-15 20:29:57.754629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.279 qpair failed and we were unable to recover it. 
00:38:05.279 [2024-05-15 20:29:57.764523] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.279 [2024-05-15 20:29:57.764608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.279 [2024-05-15 20:29:57.764624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.279 [2024-05-15 20:29:57.764631] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.279 [2024-05-15 20:29:57.764638] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.279 [2024-05-15 20:29:57.764653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.279 qpair failed and we were unable to recover it. 00:38:05.279 [2024-05-15 20:29:57.774561] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.279 [2024-05-15 20:29:57.774637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.279 [2024-05-15 20:29:57.774654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.279 [2024-05-15 20:29:57.774661] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.279 [2024-05-15 20:29:57.774668] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.279 [2024-05-15 20:29:57.774682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.279 qpair failed and we were unable to recover it. 00:38:05.542 [2024-05-15 20:29:57.784572] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.542 [2024-05-15 20:29:57.784656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.542 [2024-05-15 20:29:57.784672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.542 [2024-05-15 20:29:57.784679] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.542 [2024-05-15 20:29:57.784685] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.542 [2024-05-15 20:29:57.784700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.542 qpair failed and we were unable to recover it. 
00:38:05.542 [2024-05-15 20:29:57.794560] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.542 [2024-05-15 20:29:57.794639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.542 [2024-05-15 20:29:57.794655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.542 [2024-05-15 20:29:57.794663] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.542 [2024-05-15 20:29:57.794670] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.542 [2024-05-15 20:29:57.794689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.542 qpair failed and we were unable to recover it. 00:38:05.542 [2024-05-15 20:29:57.804641] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.542 [2024-05-15 20:29:57.804720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.542 [2024-05-15 20:29:57.804736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.542 [2024-05-15 20:29:57.804745] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.542 [2024-05-15 20:29:57.804752] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.542 [2024-05-15 20:29:57.804766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.542 qpair failed and we were unable to recover it. 00:38:05.542 [2024-05-15 20:29:57.814642] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.542 [2024-05-15 20:29:57.814716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.542 [2024-05-15 20:29:57.814733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.542 [2024-05-15 20:29:57.814741] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.542 [2024-05-15 20:29:57.814748] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.543 [2024-05-15 20:29:57.814761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.543 qpair failed and we were unable to recover it. 
00:38:05.543 [2024-05-15 20:29:57.824550] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.543 [2024-05-15 20:29:57.824624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.543 [2024-05-15 20:29:57.824641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.543 [2024-05-15 20:29:57.824648] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.543 [2024-05-15 20:29:57.824655] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.543 [2024-05-15 20:29:57.824671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-05-15 20:29:57.834719] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.543 [2024-05-15 20:29:57.834800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.543 [2024-05-15 20:29:57.834816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.543 [2024-05-15 20:29:57.834824] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.543 [2024-05-15 20:29:57.834831] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.543 [2024-05-15 20:29:57.834845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-05-15 20:29:57.844744] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.543 [2024-05-15 20:29:57.844829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.543 [2024-05-15 20:29:57.844850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.543 [2024-05-15 20:29:57.844857] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.543 [2024-05-15 20:29:57.844863] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.543 [2024-05-15 20:29:57.844878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.543 qpair failed and we were unable to recover it. 
00:38:05.543 [2024-05-15 20:29:57.854636] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.543 [2024-05-15 20:29:57.854712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.543 [2024-05-15 20:29:57.854729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.543 [2024-05-15 20:29:57.854736] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.543 [2024-05-15 20:29:57.854743] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.543 [2024-05-15 20:29:57.854758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-05-15 20:29:57.864801] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.543 [2024-05-15 20:29:57.864879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.543 [2024-05-15 20:29:57.864895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.543 [2024-05-15 20:29:57.864903] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.543 [2024-05-15 20:29:57.864910] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.543 [2024-05-15 20:29:57.864924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-05-15 20:29:57.874794] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.543 [2024-05-15 20:29:57.874876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.543 [2024-05-15 20:29:57.874893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.543 [2024-05-15 20:29:57.874901] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.543 [2024-05-15 20:29:57.874907] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.543 [2024-05-15 20:29:57.874922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.543 qpair failed and we were unable to recover it. 
00:38:05.543 [2024-05-15 20:29:57.884819] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.543 [2024-05-15 20:29:57.884891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.543 [2024-05-15 20:29:57.884907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.543 [2024-05-15 20:29:57.884914] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.543 [2024-05-15 20:29:57.884925] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.543 [2024-05-15 20:29:57.884940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-05-15 20:29:57.894750] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.543 [2024-05-15 20:29:57.894855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.543 [2024-05-15 20:29:57.894872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.543 [2024-05-15 20:29:57.894879] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.543 [2024-05-15 20:29:57.894886] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.543 [2024-05-15 20:29:57.894900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-05-15 20:29:57.904885] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.543 [2024-05-15 20:29:57.904966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.543 [2024-05-15 20:29:57.904983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.543 [2024-05-15 20:29:57.904990] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.543 [2024-05-15 20:29:57.904997] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.543 [2024-05-15 20:29:57.905012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.543 qpair failed and we were unable to recover it. 
00:38:05.543 [2024-05-15 20:29:57.914915] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.543 [2024-05-15 20:29:57.914995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.543 [2024-05-15 20:29:57.915021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.543 [2024-05-15 20:29:57.915030] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.543 [2024-05-15 20:29:57.915037] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.543 [2024-05-15 20:29:57.915055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-05-15 20:29:57.924923] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.543 [2024-05-15 20:29:57.925003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.543 [2024-05-15 20:29:57.925029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.543 [2024-05-15 20:29:57.925037] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.543 [2024-05-15 20:29:57.925044] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.543 [2024-05-15 20:29:57.925063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-05-15 20:29:57.934924] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.543 [2024-05-15 20:29:57.935006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.543 [2024-05-15 20:29:57.935032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.543 [2024-05-15 20:29:57.935041] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.543 [2024-05-15 20:29:57.935048] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.543 [2024-05-15 20:29:57.935066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.543 qpair failed and we were unable to recover it. 
00:38:05.543 [2024-05-15 20:29:57.944977] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.543 [2024-05-15 20:29:57.945053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.543 [2024-05-15 20:29:57.945071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.543 [2024-05-15 20:29:57.945079] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.543 [2024-05-15 20:29:57.945086] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.543 [2024-05-15 20:29:57.945101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.543 qpair failed and we were unable to recover it. 00:38:05.543 [2024-05-15 20:29:57.955011] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.543 [2024-05-15 20:29:57.955098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.543 [2024-05-15 20:29:57.955116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.543 [2024-05-15 20:29:57.955123] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.543 [2024-05-15 20:29:57.955130] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.544 [2024-05-15 20:29:57.955145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.544 qpair failed and we were unable to recover it. 00:38:05.544 [2024-05-15 20:29:57.965116] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.544 [2024-05-15 20:29:57.965229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.544 [2024-05-15 20:29:57.965246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.544 [2024-05-15 20:29:57.965254] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.544 [2024-05-15 20:29:57.965260] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.544 [2024-05-15 20:29:57.965275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.544 qpair failed and we were unable to recover it. 
00:38:05.544 [2024-05-15 20:29:57.975082] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.544 [2024-05-15 20:29:57.975155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.544 [2024-05-15 20:29:57.975171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.544 [2024-05-15 20:29:57.975179] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.544 [2024-05-15 20:29:57.975190] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.544 [2024-05-15 20:29:57.975205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.544 qpair failed and we were unable to recover it. 00:38:05.544 [2024-05-15 20:29:57.985066] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.544 [2024-05-15 20:29:57.985141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.544 [2024-05-15 20:29:57.985157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.544 [2024-05-15 20:29:57.985165] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.544 [2024-05-15 20:29:57.985171] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.544 [2024-05-15 20:29:57.985186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.544 qpair failed and we were unable to recover it. 00:38:05.544 [2024-05-15 20:29:57.995018] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.544 [2024-05-15 20:29:57.995099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.544 [2024-05-15 20:29:57.995116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.544 [2024-05-15 20:29:57.995124] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.544 [2024-05-15 20:29:57.995131] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.544 [2024-05-15 20:29:57.995145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.544 qpair failed and we were unable to recover it. 
00:38:05.544 [2024-05-15 20:29:58.005134] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.544 [2024-05-15 20:29:58.005213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.544 [2024-05-15 20:29:58.005229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.544 [2024-05-15 20:29:58.005236] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.544 [2024-05-15 20:29:58.005243] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.544 [2024-05-15 20:29:58.005258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.544 qpair failed and we were unable to recover it. 00:38:05.544 [2024-05-15 20:29:58.015162] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.544 [2024-05-15 20:29:58.015239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.544 [2024-05-15 20:29:58.015256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.544 [2024-05-15 20:29:58.015264] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.544 [2024-05-15 20:29:58.015272] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.544 [2024-05-15 20:29:58.015286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.544 qpair failed and we were unable to recover it. 00:38:05.544 [2024-05-15 20:29:58.025208] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.544 [2024-05-15 20:29:58.025287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.544 [2024-05-15 20:29:58.025303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.544 [2024-05-15 20:29:58.025311] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.544 [2024-05-15 20:29:58.025323] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.544 [2024-05-15 20:29:58.025338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.544 qpair failed and we were unable to recover it. 
00:38:05.544 [2024-05-15 20:29:58.035224] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.544 [2024-05-15 20:29:58.035301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.544 [2024-05-15 20:29:58.035324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.544 [2024-05-15 20:29:58.035332] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.544 [2024-05-15 20:29:58.035339] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.544 [2024-05-15 20:29:58.035353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.544 qpair failed and we were unable to recover it. 00:38:05.807 [2024-05-15 20:29:58.045257] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.807 [2024-05-15 20:29:58.045335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.807 [2024-05-15 20:29:58.045352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.807 [2024-05-15 20:29:58.045359] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.807 [2024-05-15 20:29:58.045367] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.807 [2024-05-15 20:29:58.045382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.807 qpair failed and we were unable to recover it. 00:38:05.808 [2024-05-15 20:29:58.055290] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.808 [2024-05-15 20:29:58.055371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.808 [2024-05-15 20:29:58.055388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.808 [2024-05-15 20:29:58.055396] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.808 [2024-05-15 20:29:58.055404] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.808 [2024-05-15 20:29:58.055418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.808 qpair failed and we were unable to recover it. 
00:38:05.808 [2024-05-15 20:29:58.065321] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.808 [2024-05-15 20:29:58.065400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.808 [2024-05-15 20:29:58.065416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.808 [2024-05-15 20:29:58.065424] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.808 [2024-05-15 20:29:58.065435] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.808 [2024-05-15 20:29:58.065450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.808 qpair failed and we were unable to recover it. 00:38:05.808 [2024-05-15 20:29:58.075354] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.808 [2024-05-15 20:29:58.075437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.808 [2024-05-15 20:29:58.075454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.808 [2024-05-15 20:29:58.075462] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.808 [2024-05-15 20:29:58.075468] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.808 [2024-05-15 20:29:58.075483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.808 qpair failed and we were unable to recover it. 00:38:05.808 [2024-05-15 20:29:58.085411] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.808 [2024-05-15 20:29:58.085488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.808 [2024-05-15 20:29:58.085504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.808 [2024-05-15 20:29:58.085511] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.808 [2024-05-15 20:29:58.085518] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.808 [2024-05-15 20:29:58.085533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.808 qpair failed and we were unable to recover it. 
00:38:05.808 [2024-05-15 20:29:58.095464] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.808 [2024-05-15 20:29:58.095544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.808 [2024-05-15 20:29:58.095560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.808 [2024-05-15 20:29:58.095568] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.808 [2024-05-15 20:29:58.095575] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.808 [2024-05-15 20:29:58.095589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.808 qpair failed and we were unable to recover it. 00:38:05.808 [2024-05-15 20:29:58.105467] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.808 [2024-05-15 20:29:58.105540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.808 [2024-05-15 20:29:58.105557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.808 [2024-05-15 20:29:58.105564] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.808 [2024-05-15 20:29:58.105571] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.808 [2024-05-15 20:29:58.105587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.808 qpair failed and we were unable to recover it. 00:38:05.808 [2024-05-15 20:29:58.115450] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.808 [2024-05-15 20:29:58.115528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.808 [2024-05-15 20:29:58.115545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.808 [2024-05-15 20:29:58.115552] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.808 [2024-05-15 20:29:58.115559] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.808 [2024-05-15 20:29:58.115574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.808 qpair failed and we were unable to recover it. 
00:38:05.808 [2024-05-15 20:29:58.125488] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.808 [2024-05-15 20:29:58.125560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.808 [2024-05-15 20:29:58.125577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.808 [2024-05-15 20:29:58.125584] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.808 [2024-05-15 20:29:58.125590] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.808 [2024-05-15 20:29:58.125606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.808 qpair failed and we were unable to recover it. 00:38:05.808 [2024-05-15 20:29:58.135526] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.808 [2024-05-15 20:29:58.135601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.808 [2024-05-15 20:29:58.135617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.808 [2024-05-15 20:29:58.135624] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.808 [2024-05-15 20:29:58.135631] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.808 [2024-05-15 20:29:58.135645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.808 qpair failed and we were unable to recover it. 00:38:05.808 [2024-05-15 20:29:58.145552] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.808 [2024-05-15 20:29:58.145644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.808 [2024-05-15 20:29:58.145661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.808 [2024-05-15 20:29:58.145669] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.808 [2024-05-15 20:29:58.145675] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.808 [2024-05-15 20:29:58.145691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.808 qpair failed and we were unable to recover it. 
00:38:05.808 [2024-05-15 20:29:58.155576] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.808 [2024-05-15 20:29:58.155656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.808 [2024-05-15 20:29:58.155673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.808 [2024-05-15 20:29:58.155684] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.808 [2024-05-15 20:29:58.155690] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.809 [2024-05-15 20:29:58.155705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.809 qpair failed and we were unable to recover it. 00:38:05.809 [2024-05-15 20:29:58.165614] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.809 [2024-05-15 20:29:58.165689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.809 [2024-05-15 20:29:58.165705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.809 [2024-05-15 20:29:58.165712] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.809 [2024-05-15 20:29:58.165719] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.809 [2024-05-15 20:29:58.165734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.809 qpair failed and we were unable to recover it. 00:38:05.809 [2024-05-15 20:29:58.175614] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.809 [2024-05-15 20:29:58.175690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.809 [2024-05-15 20:29:58.175706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.809 [2024-05-15 20:29:58.175714] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.809 [2024-05-15 20:29:58.175722] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.809 [2024-05-15 20:29:58.175737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.809 qpair failed and we were unable to recover it. 
00:38:05.809 [2024-05-15 20:29:58.185667] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.809 [2024-05-15 20:29:58.185744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.809 [2024-05-15 20:29:58.185760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.809 [2024-05-15 20:29:58.185767] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.809 [2024-05-15 20:29:58.185775] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.809 [2024-05-15 20:29:58.185789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.809 qpair failed and we were unable to recover it. 00:38:05.809 [2024-05-15 20:29:58.195683] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.809 [2024-05-15 20:29:58.195768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.809 [2024-05-15 20:29:58.195785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.809 [2024-05-15 20:29:58.195793] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.809 [2024-05-15 20:29:58.195799] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.809 [2024-05-15 20:29:58.195813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.809 qpair failed and we were unable to recover it. 00:38:05.809 [2024-05-15 20:29:58.205727] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.809 [2024-05-15 20:29:58.205801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.809 [2024-05-15 20:29:58.205818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.809 [2024-05-15 20:29:58.205825] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.809 [2024-05-15 20:29:58.205832] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.809 [2024-05-15 20:29:58.205846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.809 qpair failed and we were unable to recover it. 
00:38:05.809 [2024-05-15 20:29:58.215751] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.809 [2024-05-15 20:29:58.215821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.809 [2024-05-15 20:29:58.215838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.809 [2024-05-15 20:29:58.215845] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.809 [2024-05-15 20:29:58.215852] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.809 [2024-05-15 20:29:58.215866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.809 qpair failed and we were unable to recover it. 00:38:05.809 [2024-05-15 20:29:58.225787] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.809 [2024-05-15 20:29:58.225862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.809 [2024-05-15 20:29:58.225878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.809 [2024-05-15 20:29:58.225886] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.809 [2024-05-15 20:29:58.225893] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.809 [2024-05-15 20:29:58.225909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.809 qpair failed and we were unable to recover it. 00:38:05.809 [2024-05-15 20:29:58.235804] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.809 [2024-05-15 20:29:58.235923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.809 [2024-05-15 20:29:58.235940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.809 [2024-05-15 20:29:58.235948] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.809 [2024-05-15 20:29:58.235955] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.809 [2024-05-15 20:29:58.235969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.809 qpair failed and we were unable to recover it. 
00:38:05.809 [2024-05-15 20:29:58.245823] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.809 [2024-05-15 20:29:58.245897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.809 [2024-05-15 20:29:58.245914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.809 [2024-05-15 20:29:58.245929] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.809 [2024-05-15 20:29:58.245936] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.809 [2024-05-15 20:29:58.245951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.809 qpair failed and we were unable to recover it. 00:38:05.809 [2024-05-15 20:29:58.255835] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.809 [2024-05-15 20:29:58.255907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.809 [2024-05-15 20:29:58.255925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.809 [2024-05-15 20:29:58.255932] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.809 [2024-05-15 20:29:58.255939] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.809 [2024-05-15 20:29:58.255953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.809 qpair failed and we were unable to recover it. 00:38:05.809 [2024-05-15 20:29:58.265777] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.809 [2024-05-15 20:29:58.265860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.809 [2024-05-15 20:29:58.265876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.809 [2024-05-15 20:29:58.265884] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.809 [2024-05-15 20:29:58.265890] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.809 [2024-05-15 20:29:58.265905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.809 qpair failed and we were unable to recover it. 
00:38:05.809 [2024-05-15 20:29:58.275934] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.809 [2024-05-15 20:29:58.276062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.809 [2024-05-15 20:29:58.276080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.809 [2024-05-15 20:29:58.276087] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.809 [2024-05-15 20:29:58.276094] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.809 [2024-05-15 20:29:58.276108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.809 qpair failed and we were unable to recover it. 00:38:05.809 [2024-05-15 20:29:58.285914] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.809 [2024-05-15 20:29:58.285995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.809 [2024-05-15 20:29:58.286020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.809 [2024-05-15 20:29:58.286030] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.809 [2024-05-15 20:29:58.286037] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.809 [2024-05-15 20:29:58.286055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.809 qpair failed and we were unable to recover it. 00:38:05.809 [2024-05-15 20:29:58.295927] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.809 [2024-05-15 20:29:58.296008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.809 [2024-05-15 20:29:58.296034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.809 [2024-05-15 20:29:58.296043] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.810 [2024-05-15 20:29:58.296050] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.810 [2024-05-15 20:29:58.296068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.810 qpair failed and we were unable to recover it. 
00:38:05.810 [2024-05-15 20:29:58.305891] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:05.810 [2024-05-15 20:29:58.305971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:05.810 [2024-05-15 20:29:58.305997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:05.810 [2024-05-15 20:29:58.306005] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:05.810 [2024-05-15 20:29:58.306012] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:05.810 [2024-05-15 20:29:58.306030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:05.810 qpair failed and we were unable to recover it. 00:38:06.072 [2024-05-15 20:29:58.316012] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.072 [2024-05-15 20:29:58.316104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.072 [2024-05-15 20:29:58.316123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.072 [2024-05-15 20:29:58.316131] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.072 [2024-05-15 20:29:58.316137] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.072 [2024-05-15 20:29:58.316153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.072 qpair failed and we were unable to recover it. 00:38:06.072 [2024-05-15 20:29:58.326055] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.072 [2024-05-15 20:29:58.326133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.072 [2024-05-15 20:29:58.326150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.072 [2024-05-15 20:29:58.326158] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.072 [2024-05-15 20:29:58.326165] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.073 [2024-05-15 20:29:58.326180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.073 qpair failed and we were unable to recover it. 
00:38:06.073 [2024-05-15 20:29:58.336075] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.073 [2024-05-15 20:29:58.336148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.073 [2024-05-15 20:29:58.336165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.073 [2024-05-15 20:29:58.336178] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.073 [2024-05-15 20:29:58.336185] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.073 [2024-05-15 20:29:58.336201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.073 qpair failed and we were unable to recover it. 00:38:06.073 [2024-05-15 20:29:58.346149] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.073 [2024-05-15 20:29:58.346222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.073 [2024-05-15 20:29:58.346239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.073 [2024-05-15 20:29:58.346246] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.073 [2024-05-15 20:29:58.346253] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.073 [2024-05-15 20:29:58.346267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.073 qpair failed and we were unable to recover it. 00:38:06.073 [2024-05-15 20:29:58.356121] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.073 [2024-05-15 20:29:58.356203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.073 [2024-05-15 20:29:58.356219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.073 [2024-05-15 20:29:58.356227] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.073 [2024-05-15 20:29:58.356234] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.073 [2024-05-15 20:29:58.356248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.073 qpair failed and we were unable to recover it. 
00:38:06.073 [2024-05-15 20:29:58.366089] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.073 [2024-05-15 20:29:58.366180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.073 [2024-05-15 20:29:58.366197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.073 [2024-05-15 20:29:58.366204] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.073 [2024-05-15 20:29:58.366211] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.073 [2024-05-15 20:29:58.366226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.073 qpair failed and we were unable to recover it. 00:38:06.073 [2024-05-15 20:29:58.376087] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.073 [2024-05-15 20:29:58.376160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.073 [2024-05-15 20:29:58.376177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.073 [2024-05-15 20:29:58.376185] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.073 [2024-05-15 20:29:58.376192] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.073 [2024-05-15 20:29:58.376206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.073 qpair failed and we were unable to recover it. 00:38:06.073 [2024-05-15 20:29:58.386268] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.073 [2024-05-15 20:29:58.386382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.073 [2024-05-15 20:29:58.386399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.073 [2024-05-15 20:29:58.386406] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.073 [2024-05-15 20:29:58.386413] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.073 [2024-05-15 20:29:58.386429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.073 qpair failed and we were unable to recover it. 
00:38:06.073 [2024-05-15 20:29:58.396212] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.073 [2024-05-15 20:29:58.396287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.073 [2024-05-15 20:29:58.396303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.073 [2024-05-15 20:29:58.396311] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.073 [2024-05-15 20:29:58.396323] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.073 [2024-05-15 20:29:58.396338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.073 qpair failed and we were unable to recover it. 00:38:06.073 [2024-05-15 20:29:58.406298] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.073 [2024-05-15 20:29:58.406376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.073 [2024-05-15 20:29:58.406393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.073 [2024-05-15 20:29:58.406400] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.073 [2024-05-15 20:29:58.406406] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.073 [2024-05-15 20:29:58.406423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.073 qpair failed and we were unable to recover it. 00:38:06.073 [2024-05-15 20:29:58.416323] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.073 [2024-05-15 20:29:58.416400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.073 [2024-05-15 20:29:58.416416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.073 [2024-05-15 20:29:58.416424] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.073 [2024-05-15 20:29:58.416430] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.073 [2024-05-15 20:29:58.416445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.073 qpair failed and we were unable to recover it. 
00:38:06.073 [2024-05-15 20:29:58.426345] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.073 [2024-05-15 20:29:58.426420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.073 [2024-05-15 20:29:58.426441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.073 [2024-05-15 20:29:58.426449] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.073 [2024-05-15 20:29:58.426455] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.073 [2024-05-15 20:29:58.426470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.073 qpair failed and we were unable to recover it. 00:38:06.073 [2024-05-15 20:29:58.436389] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.073 [2024-05-15 20:29:58.436487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.073 [2024-05-15 20:29:58.436503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.073 [2024-05-15 20:29:58.436511] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.073 [2024-05-15 20:29:58.436517] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.073 [2024-05-15 20:29:58.436533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.073 qpair failed and we were unable to recover it. 00:38:06.073 [2024-05-15 20:29:58.446379] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.073 [2024-05-15 20:29:58.446449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.073 [2024-05-15 20:29:58.446466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.073 [2024-05-15 20:29:58.446474] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.073 [2024-05-15 20:29:58.446480] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.073 [2024-05-15 20:29:58.446495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.073 qpair failed and we were unable to recover it. 
00:38:06.073 [2024-05-15 20:29:58.456443] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.073 [2024-05-15 20:29:58.456530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.073 [2024-05-15 20:29:58.456547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.073 [2024-05-15 20:29:58.456554] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.073 [2024-05-15 20:29:58.456560] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.073 [2024-05-15 20:29:58.456576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.073 qpair failed and we were unable to recover it. 00:38:06.073 [2024-05-15 20:29:58.466450] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.073 [2024-05-15 20:29:58.466522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.073 [2024-05-15 20:29:58.466539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.074 [2024-05-15 20:29:58.466546] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.074 [2024-05-15 20:29:58.466552] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.074 [2024-05-15 20:29:58.466567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.074 qpair failed and we were unable to recover it. 00:38:06.074 [2024-05-15 20:29:58.476443] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.074 [2024-05-15 20:29:58.476520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.074 [2024-05-15 20:29:58.476536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.074 [2024-05-15 20:29:58.476544] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.074 [2024-05-15 20:29:58.476551] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.074 [2024-05-15 20:29:58.476565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.074 qpair failed and we were unable to recover it. 
00:38:06.074 [2024-05-15 20:29:58.486499] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.074 [2024-05-15 20:29:58.486572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.074 [2024-05-15 20:29:58.486589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.074 [2024-05-15 20:29:58.486597] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.074 [2024-05-15 20:29:58.486604] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.074 [2024-05-15 20:29:58.486618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.074 qpair failed and we were unable to recover it. 00:38:06.074 [2024-05-15 20:29:58.496435] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.074 [2024-05-15 20:29:58.496510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.074 [2024-05-15 20:29:58.496526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.074 [2024-05-15 20:29:58.496533] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.074 [2024-05-15 20:29:58.496540] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.074 [2024-05-15 20:29:58.496555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.074 qpair failed and we were unable to recover it. 00:38:06.074 [2024-05-15 20:29:58.506592] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.074 [2024-05-15 20:29:58.506674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.074 [2024-05-15 20:29:58.506690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.074 [2024-05-15 20:29:58.506700] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.074 [2024-05-15 20:29:58.506706] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.074 [2024-05-15 20:29:58.506720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.074 qpair failed and we were unable to recover it. 
00:38:06.074 [2024-05-15 20:29:58.516622] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.074 [2024-05-15 20:29:58.516700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.074 [2024-05-15 20:29:58.516721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.074 [2024-05-15 20:29:58.516728] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.074 [2024-05-15 20:29:58.516735] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.074 [2024-05-15 20:29:58.516749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.074 qpair failed and we were unable to recover it. 00:38:06.074 [2024-05-15 20:29:58.526615] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.074 [2024-05-15 20:29:58.526717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.074 [2024-05-15 20:29:58.526734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.074 [2024-05-15 20:29:58.526742] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.074 [2024-05-15 20:29:58.526749] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.074 [2024-05-15 20:29:58.526763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.074 qpair failed and we were unable to recover it. 00:38:06.074 [2024-05-15 20:29:58.536645] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.074 [2024-05-15 20:29:58.536717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.074 [2024-05-15 20:29:58.536734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.074 [2024-05-15 20:29:58.536741] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.074 [2024-05-15 20:29:58.536747] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.074 [2024-05-15 20:29:58.536763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.074 qpair failed and we were unable to recover it. 
00:38:06.074 [2024-05-15 20:29:58.546663] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.074 [2024-05-15 20:29:58.546736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.074 [2024-05-15 20:29:58.546752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.074 [2024-05-15 20:29:58.546760] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.074 [2024-05-15 20:29:58.546766] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.074 [2024-05-15 20:29:58.546781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.074 qpair failed and we were unable to recover it. 00:38:06.074 [2024-05-15 20:29:58.556693] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.074 [2024-05-15 20:29:58.556771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.074 [2024-05-15 20:29:58.556788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.074 [2024-05-15 20:29:58.556795] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.074 [2024-05-15 20:29:58.556803] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.074 [2024-05-15 20:29:58.556821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.074 qpair failed and we were unable to recover it. 00:38:06.074 [2024-05-15 20:29:58.566731] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.074 [2024-05-15 20:29:58.566804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.074 [2024-05-15 20:29:58.566820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.074 [2024-05-15 20:29:58.566827] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.074 [2024-05-15 20:29:58.566834] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.074 [2024-05-15 20:29:58.566848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.074 qpair failed and we were unable to recover it. 
00:38:06.338 [2024-05-15 20:29:58.576807] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.338 [2024-05-15 20:29:58.576886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.338 [2024-05-15 20:29:58.576903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.338 [2024-05-15 20:29:58.576911] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.338 [2024-05-15 20:29:58.576918] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.338 [2024-05-15 20:29:58.576933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.338 qpair failed and we were unable to recover it. 00:38:06.338 [2024-05-15 20:29:58.586779] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.338 [2024-05-15 20:29:58.586856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.338 [2024-05-15 20:29:58.586872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.338 [2024-05-15 20:29:58.586879] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.338 [2024-05-15 20:29:58.586886] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.338 [2024-05-15 20:29:58.586901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.338 qpair failed and we were unable to recover it. 00:38:06.338 [2024-05-15 20:29:58.596825] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.338 [2024-05-15 20:29:58.596904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.338 [2024-05-15 20:29:58.596921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.338 [2024-05-15 20:29:58.596928] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.338 [2024-05-15 20:29:58.596936] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.338 [2024-05-15 20:29:58.596950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.338 qpair failed and we were unable to recover it. 
00:38:06.338 [2024-05-15 20:29:58.606820] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.338 [2024-05-15 20:29:58.606896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.338 [2024-05-15 20:29:58.606917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.338 [2024-05-15 20:29:58.606925] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.338 [2024-05-15 20:29:58.606931] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.338 [2024-05-15 20:29:58.606946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.338 qpair failed and we were unable to recover it. 00:38:06.338 [2024-05-15 20:29:58.616823] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.338 [2024-05-15 20:29:58.616930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.338 [2024-05-15 20:29:58.616947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.338 [2024-05-15 20:29:58.616954] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.338 [2024-05-15 20:29:58.616960] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.338 [2024-05-15 20:29:58.616974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.338 qpair failed and we were unable to recover it. 00:38:06.338 [2024-05-15 20:29:58.626952] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.338 [2024-05-15 20:29:58.627037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.338 [2024-05-15 20:29:58.627063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.338 [2024-05-15 20:29:58.627071] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.338 [2024-05-15 20:29:58.627078] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.338 [2024-05-15 20:29:58.627097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.338 qpair failed and we were unable to recover it. 
00:38:06.338 [2024-05-15 20:29:58.636966] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.338 [2024-05-15 20:29:58.637059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.338 [2024-05-15 20:29:58.637077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.338 [2024-05-15 20:29:58.637085] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.338 [2024-05-15 20:29:58.637092] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.338 [2024-05-15 20:29:58.637108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.338 qpair failed and we were unable to recover it. 00:38:06.338 [2024-05-15 20:29:58.646974] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.338 [2024-05-15 20:29:58.647046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.338 [2024-05-15 20:29:58.647063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.338 [2024-05-15 20:29:58.647071] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.338 [2024-05-15 20:29:58.647077] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.338 [2024-05-15 20:29:58.647097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.338 qpair failed and we were unable to recover it. 00:38:06.338 [2024-05-15 20:29:58.656963] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.338 [2024-05-15 20:29:58.657042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.338 [2024-05-15 20:29:58.657059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.338 [2024-05-15 20:29:58.657067] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.338 [2024-05-15 20:29:58.657073] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.338 [2024-05-15 20:29:58.657088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.338 qpair failed and we were unable to recover it. 
00:38:06.338 [2024-05-15 20:29:58.667016] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.338 [2024-05-15 20:29:58.667095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.338 [2024-05-15 20:29:58.667112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.338 [2024-05-15 20:29:58.667119] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.338 [2024-05-15 20:29:58.667126] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.338 [2024-05-15 20:29:58.667141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.338 qpair failed and we were unable to recover it. 00:38:06.338 [2024-05-15 20:29:58.677037] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.338 [2024-05-15 20:29:58.677116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.338 [2024-05-15 20:29:58.677133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.338 [2024-05-15 20:29:58.677140] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.338 [2024-05-15 20:29:58.677147] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.338 [2024-05-15 20:29:58.677162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.338 qpair failed and we were unable to recover it. 00:38:06.338 [2024-05-15 20:29:58.687045] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.338 [2024-05-15 20:29:58.687119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.338 [2024-05-15 20:29:58.687136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.338 [2024-05-15 20:29:58.687143] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.338 [2024-05-15 20:29:58.687150] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.338 [2024-05-15 20:29:58.687164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.338 qpair failed and we were unable to recover it. 
00:38:06.338 [2024-05-15 20:29:58.697093] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.339 [2024-05-15 20:29:58.697166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.339 [2024-05-15 20:29:58.697186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.339 [2024-05-15 20:29:58.697194] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.339 [2024-05-15 20:29:58.697201] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.339 [2024-05-15 20:29:58.697216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.339 qpair failed and we were unable to recover it. 00:38:06.339 [2024-05-15 20:29:58.707153] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.339 [2024-05-15 20:29:58.707233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.339 [2024-05-15 20:29:58.707250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.339 [2024-05-15 20:29:58.707257] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.339 [2024-05-15 20:29:58.707264] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.339 [2024-05-15 20:29:58.707278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.339 qpair failed and we were unable to recover it. 00:38:06.339 [2024-05-15 20:29:58.717132] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.339 [2024-05-15 20:29:58.717210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.339 [2024-05-15 20:29:58.717227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.339 [2024-05-15 20:29:58.717235] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.339 [2024-05-15 20:29:58.717242] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.339 [2024-05-15 20:29:58.717256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.339 qpair failed and we were unable to recover it. 
00:38:06.339 [2024-05-15 20:29:58.727182] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.339 [2024-05-15 20:29:58.727257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.339 [2024-05-15 20:29:58.727274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.339 [2024-05-15 20:29:58.727281] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.339 [2024-05-15 20:29:58.727288] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.339 [2024-05-15 20:29:58.727303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.339 qpair failed and we were unable to recover it. 00:38:06.339 [2024-05-15 20:29:58.737195] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.339 [2024-05-15 20:29:58.737299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.339 [2024-05-15 20:29:58.737321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.339 [2024-05-15 20:29:58.737329] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.339 [2024-05-15 20:29:58.737335] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.339 [2024-05-15 20:29:58.737354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.339 qpair failed and we were unable to recover it. 00:38:06.339 [2024-05-15 20:29:58.747117] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.339 [2024-05-15 20:29:58.747196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.339 [2024-05-15 20:29:58.747213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.339 [2024-05-15 20:29:58.747220] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.339 [2024-05-15 20:29:58.747227] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.339 [2024-05-15 20:29:58.747242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.339 qpair failed and we were unable to recover it. 
00:38:06.339 [2024-05-15 20:29:58.757241] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.339 [2024-05-15 20:29:58.757326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.339 [2024-05-15 20:29:58.757343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.339 [2024-05-15 20:29:58.757352] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.339 [2024-05-15 20:29:58.757359] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.339 [2024-05-15 20:29:58.757373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.339 qpair failed and we were unable to recover it. 00:38:06.339 [2024-05-15 20:29:58.767300] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.339 [2024-05-15 20:29:58.767463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.339 [2024-05-15 20:29:58.767480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.339 [2024-05-15 20:29:58.767488] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.339 [2024-05-15 20:29:58.767494] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.339 [2024-05-15 20:29:58.767508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.339 qpair failed and we were unable to recover it. 00:38:06.339 [2024-05-15 20:29:58.777338] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.339 [2024-05-15 20:29:58.777408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.339 [2024-05-15 20:29:58.777424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.339 [2024-05-15 20:29:58.777432] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.339 [2024-05-15 20:29:58.777439] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.339 [2024-05-15 20:29:58.777454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.339 qpair failed and we were unable to recover it. 
00:38:06.339 [2024-05-15 20:29:58.787370] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.339 [2024-05-15 20:29:58.787481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.339 [2024-05-15 20:29:58.787502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.339 [2024-05-15 20:29:58.787510] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.339 [2024-05-15 20:29:58.787516] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.339 [2024-05-15 20:29:58.787530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.339 qpair failed and we were unable to recover it. 00:38:06.339 [2024-05-15 20:29:58.797260] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.339 [2024-05-15 20:29:58.797343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.339 [2024-05-15 20:29:58.797360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.339 [2024-05-15 20:29:58.797367] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.339 [2024-05-15 20:29:58.797374] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.339 [2024-05-15 20:29:58.797388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.339 qpair failed and we were unable to recover it. 00:38:06.339 [2024-05-15 20:29:58.807386] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.339 [2024-05-15 20:29:58.807457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.339 [2024-05-15 20:29:58.807473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.339 [2024-05-15 20:29:58.807481] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.339 [2024-05-15 20:29:58.807487] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.339 [2024-05-15 20:29:58.807502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.339 qpair failed and we were unable to recover it. 
00:38:06.339 [2024-05-15 20:29:58.817452] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.339 [2024-05-15 20:29:58.817528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.339 [2024-05-15 20:29:58.817545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.339 [2024-05-15 20:29:58.817552] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.339 [2024-05-15 20:29:58.817559] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.339 [2024-05-15 20:29:58.817574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.339 qpair failed and we were unable to recover it. 00:38:06.339 [2024-05-15 20:29:58.827452] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.339 [2024-05-15 20:29:58.827528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.339 [2024-05-15 20:29:58.827545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.339 [2024-05-15 20:29:58.827552] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.339 [2024-05-15 20:29:58.827563] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.339 [2024-05-15 20:29:58.827578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.339 qpair failed and we were unable to recover it. 00:38:06.602 [2024-05-15 20:29:58.837481] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.602 [2024-05-15 20:29:58.837561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.602 [2024-05-15 20:29:58.837578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.602 [2024-05-15 20:29:58.837586] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.602 [2024-05-15 20:29:58.837592] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.602 [2024-05-15 20:29:58.837608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.602 qpair failed and we were unable to recover it. 
00:38:06.602 [2024-05-15 20:29:58.847513] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.602 [2024-05-15 20:29:58.847588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.602 [2024-05-15 20:29:58.847604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.602 [2024-05-15 20:29:58.847612] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.602 [2024-05-15 20:29:58.847618] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.602 [2024-05-15 20:29:58.847633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.602 qpair failed and we were unable to recover it. 00:38:06.602 [2024-05-15 20:29:58.857635] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.602 [2024-05-15 20:29:58.857708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.602 [2024-05-15 20:29:58.857725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.602 [2024-05-15 20:29:58.857732] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.602 [2024-05-15 20:29:58.857739] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.602 [2024-05-15 20:29:58.857754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.602 qpair failed and we were unable to recover it. 00:38:06.602 [2024-05-15 20:29:58.867593] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.602 [2024-05-15 20:29:58.867664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.602 [2024-05-15 20:29:58.867680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.602 [2024-05-15 20:29:58.867687] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.602 [2024-05-15 20:29:58.867693] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.602 [2024-05-15 20:29:58.867709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.602 qpair failed and we were unable to recover it. 
00:38:06.602 [2024-05-15 20:29:58.877589] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.602 [2024-05-15 20:29:58.877668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.602 [2024-05-15 20:29:58.877685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.602 [2024-05-15 20:29:58.877693] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.602 [2024-05-15 20:29:58.877699] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.602 [2024-05-15 20:29:58.877713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.602 qpair failed and we were unable to recover it. 00:38:06.602 [2024-05-15 20:29:58.887645] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.602 [2024-05-15 20:29:58.887718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.602 [2024-05-15 20:29:58.887734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.602 [2024-05-15 20:29:58.887741] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.602 [2024-05-15 20:29:58.887748] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.602 [2024-05-15 20:29:58.887763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.602 qpair failed and we were unable to recover it. 00:38:06.602 [2024-05-15 20:29:58.897641] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.602 [2024-05-15 20:29:58.897721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.602 [2024-05-15 20:29:58.897737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.602 [2024-05-15 20:29:58.897745] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.602 [2024-05-15 20:29:58.897752] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.602 [2024-05-15 20:29:58.897766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.602 qpair failed and we were unable to recover it. 
00:38:06.602 [2024-05-15 20:29:58.907689] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.602 [2024-05-15 20:29:58.907764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.602 [2024-05-15 20:29:58.907780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.602 [2024-05-15 20:29:58.907787] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.602 [2024-05-15 20:29:58.907795] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.602 [2024-05-15 20:29:58.907810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.602 qpair failed and we were unable to recover it. 00:38:06.602 [2024-05-15 20:29:58.917600] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.602 [2024-05-15 20:29:58.917674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.602 [2024-05-15 20:29:58.917690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.602 [2024-05-15 20:29:58.917698] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.602 [2024-05-15 20:29:58.917709] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.602 [2024-05-15 20:29:58.917724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.602 qpair failed and we were unable to recover it. 00:38:06.602 [2024-05-15 20:29:58.927699] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.602 [2024-05-15 20:29:58.927771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.602 [2024-05-15 20:29:58.927788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.602 [2024-05-15 20:29:58.927795] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.602 [2024-05-15 20:29:58.927803] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.602 [2024-05-15 20:29:58.927817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.602 qpair failed and we were unable to recover it. 
00:38:06.602 [2024-05-15 20:29:58.937651] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.602 [2024-05-15 20:29:58.937726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.602 [2024-05-15 20:29:58.937742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.602 [2024-05-15 20:29:58.937749] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.602 [2024-05-15 20:29:58.937756] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.602 [2024-05-15 20:29:58.937770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.602 qpair failed and we were unable to recover it. 00:38:06.602 [2024-05-15 20:29:58.947864] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.602 [2024-05-15 20:29:58.947975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.602 [2024-05-15 20:29:58.947993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.602 [2024-05-15 20:29:58.948000] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.602 [2024-05-15 20:29:58.948006] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.602 [2024-05-15 20:29:58.948021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.602 qpair failed and we were unable to recover it. 00:38:06.602 [2024-05-15 20:29:58.957828] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.603 [2024-05-15 20:29:58.957936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.603 [2024-05-15 20:29:58.957953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.603 [2024-05-15 20:29:58.957960] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.603 [2024-05-15 20:29:58.957967] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.603 [2024-05-15 20:29:58.957981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.603 qpair failed and we were unable to recover it. 
00:38:06.603 [2024-05-15 20:29:58.967893] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.603 [2024-05-15 20:29:58.968013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.603 [2024-05-15 20:29:58.968030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.603 [2024-05-15 20:29:58.968037] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.603 [2024-05-15 20:29:58.968043] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.603 [2024-05-15 20:29:58.968058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.603 qpair failed and we were unable to recover it. 00:38:06.603 [2024-05-15 20:29:58.977856] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.603 [2024-05-15 20:29:58.977927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.603 [2024-05-15 20:29:58.977943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.603 [2024-05-15 20:29:58.977950] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.603 [2024-05-15 20:29:58.977957] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.603 [2024-05-15 20:29:58.977971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.603 qpair failed and we were unable to recover it. 00:38:06.603 [2024-05-15 20:29:58.987880] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.603 [2024-05-15 20:29:58.987954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.603 [2024-05-15 20:29:58.987970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.603 [2024-05-15 20:29:58.987978] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.603 [2024-05-15 20:29:58.987984] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.603 [2024-05-15 20:29:58.987999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.603 qpair failed and we were unable to recover it. 
00:38:06.603 [2024-05-15 20:29:58.997911] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.603 [2024-05-15 20:29:58.997986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.603 [2024-05-15 20:29:58.998003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.603 [2024-05-15 20:29:58.998010] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.603 [2024-05-15 20:29:58.998017] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.603 [2024-05-15 20:29:58.998031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.603 qpair failed and we were unable to recover it. 00:38:06.603 [2024-05-15 20:29:59.007954] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.603 [2024-05-15 20:29:59.008028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.603 [2024-05-15 20:29:59.008045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.603 [2024-05-15 20:29:59.008052] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.603 [2024-05-15 20:29:59.008066] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.603 [2024-05-15 20:29:59.008081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.603 qpair failed and we were unable to recover it. 00:38:06.603 [2024-05-15 20:29:59.017963] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.603 [2024-05-15 20:29:59.018072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.603 [2024-05-15 20:29:59.018089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.603 [2024-05-15 20:29:59.018097] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.603 [2024-05-15 20:29:59.018103] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.603 [2024-05-15 20:29:59.018118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.603 qpair failed and we were unable to recover it. 
00:38:06.603 [2024-05-15 20:29:59.027995] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.603 [2024-05-15 20:29:59.028077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.603 [2024-05-15 20:29:59.028103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.603 [2024-05-15 20:29:59.028112] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.603 [2024-05-15 20:29:59.028119] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.603 [2024-05-15 20:29:59.028138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.603 qpair failed and we were unable to recover it. 00:38:06.603 [2024-05-15 20:29:59.038024] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.603 [2024-05-15 20:29:59.038103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.603 [2024-05-15 20:29:59.038121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.603 [2024-05-15 20:29:59.038130] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.603 [2024-05-15 20:29:59.038137] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.603 [2024-05-15 20:29:59.038153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.603 qpair failed and we were unable to recover it. 00:38:06.603 [2024-05-15 20:29:59.047952] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.603 [2024-05-15 20:29:59.048036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.603 [2024-05-15 20:29:59.048062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.603 [2024-05-15 20:29:59.048071] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.603 [2024-05-15 20:29:59.048078] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.603 [2024-05-15 20:29:59.048096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.603 qpair failed and we were unable to recover it. 
00:38:06.603 [2024-05-15 20:29:59.058077] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.603 [2024-05-15 20:29:59.058154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.603 [2024-05-15 20:29:59.058172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.603 [2024-05-15 20:29:59.058180] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.603 [2024-05-15 20:29:59.058187] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.603 [2024-05-15 20:29:59.058202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.603 qpair failed and we were unable to recover it. 00:38:06.603 [2024-05-15 20:29:59.068119] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.603 [2024-05-15 20:29:59.068197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.603 [2024-05-15 20:29:59.068213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.603 [2024-05-15 20:29:59.068220] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.603 [2024-05-15 20:29:59.068227] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.603 [2024-05-15 20:29:59.068242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.603 qpair failed and we were unable to recover it. 00:38:06.603 [2024-05-15 20:29:59.078169] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.603 [2024-05-15 20:29:59.078243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.603 [2024-05-15 20:29:59.078260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.603 [2024-05-15 20:29:59.078267] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.603 [2024-05-15 20:29:59.078274] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.603 [2024-05-15 20:29:59.078290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.603 qpair failed and we were unable to recover it. 
00:38:06.603 [2024-05-15 20:29:59.088153] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.603 [2024-05-15 20:29:59.088273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.603 [2024-05-15 20:29:59.088290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.603 [2024-05-15 20:29:59.088297] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.603 [2024-05-15 20:29:59.088304] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.603 [2024-05-15 20:29:59.088325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.603 qpair failed and we were unable to recover it. 00:38:06.603 [2024-05-15 20:29:59.098095] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.603 [2024-05-15 20:29:59.098170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.603 [2024-05-15 20:29:59.098187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.603 [2024-05-15 20:29:59.098199] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.603 [2024-05-15 20:29:59.098205] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.603 [2024-05-15 20:29:59.098220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.603 qpair failed and we were unable to recover it. 00:38:06.865 [2024-05-15 20:29:59.108134] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.865 [2024-05-15 20:29:59.108207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.865 [2024-05-15 20:29:59.108224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.865 [2024-05-15 20:29:59.108232] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.865 [2024-05-15 20:29:59.108239] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.865 [2024-05-15 20:29:59.108254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.865 qpair failed and we were unable to recover it. 
00:38:06.865 [2024-05-15 20:29:59.118158] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.865 [2024-05-15 20:29:59.118232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.865 [2024-05-15 20:29:59.118249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.865 [2024-05-15 20:29:59.118257] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.865 [2024-05-15 20:29:59.118263] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.865 [2024-05-15 20:29:59.118279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.865 qpair failed and we were unable to recover it. 00:38:06.865 [2024-05-15 20:29:59.128273] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.865 [2024-05-15 20:29:59.128356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.865 [2024-05-15 20:29:59.128373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.865 [2024-05-15 20:29:59.128381] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.865 [2024-05-15 20:29:59.128387] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.865 [2024-05-15 20:29:59.128402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.865 qpair failed and we were unable to recover it. 00:38:06.865 [2024-05-15 20:29:59.138202] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.865 [2024-05-15 20:29:59.138276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.865 [2024-05-15 20:29:59.138291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.865 [2024-05-15 20:29:59.138299] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.865 [2024-05-15 20:29:59.138306] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.865 [2024-05-15 20:29:59.138325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.865 qpair failed and we were unable to recover it. 
00:38:06.865 [2024-05-15 20:29:59.148350] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.865 [2024-05-15 20:29:59.148422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.865 [2024-05-15 20:29:59.148439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.865 [2024-05-15 20:29:59.148446] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.866 [2024-05-15 20:29:59.148453] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.866 [2024-05-15 20:29:59.148468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-05-15 20:29:59.158403] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.866 [2024-05-15 20:29:59.158479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.866 [2024-05-15 20:29:59.158495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.866 [2024-05-15 20:29:59.158503] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.866 [2024-05-15 20:29:59.158510] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.866 [2024-05-15 20:29:59.158524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-05-15 20:29:59.168382] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.866 [2024-05-15 20:29:59.168455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.866 [2024-05-15 20:29:59.168471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.866 [2024-05-15 20:29:59.168479] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.866 [2024-05-15 20:29:59.168485] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.866 [2024-05-15 20:29:59.168499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.866 qpair failed and we were unable to recover it. 
00:38:06.866 [2024-05-15 20:29:59.178472] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.866 [2024-05-15 20:29:59.178567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.866 [2024-05-15 20:29:59.178584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.866 [2024-05-15 20:29:59.178592] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.866 [2024-05-15 20:29:59.178598] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.866 [2024-05-15 20:29:59.178613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-05-15 20:29:59.188475] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.866 [2024-05-15 20:29:59.188547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.866 [2024-05-15 20:29:59.188564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.866 [2024-05-15 20:29:59.188575] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.866 [2024-05-15 20:29:59.188582] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.866 [2024-05-15 20:29:59.188597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-05-15 20:29:59.198483] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.866 [2024-05-15 20:29:59.198558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.866 [2024-05-15 20:29:59.198575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.866 [2024-05-15 20:29:59.198582] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.866 [2024-05-15 20:29:59.198589] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.866 [2024-05-15 20:29:59.198604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.866 qpair failed and we were unable to recover it. 
00:38:06.866 [2024-05-15 20:29:59.208506] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.866 [2024-05-15 20:29:59.208578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.866 [2024-05-15 20:29:59.208595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.866 [2024-05-15 20:29:59.208603] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.866 [2024-05-15 20:29:59.208610] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.866 [2024-05-15 20:29:59.208624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-05-15 20:29:59.218598] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.866 [2024-05-15 20:29:59.218677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.866 [2024-05-15 20:29:59.218693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.866 [2024-05-15 20:29:59.218701] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.866 [2024-05-15 20:29:59.218708] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.866 [2024-05-15 20:29:59.218723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-05-15 20:29:59.228587] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.866 [2024-05-15 20:29:59.228661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.866 [2024-05-15 20:29:59.228677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.866 [2024-05-15 20:29:59.228684] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.866 [2024-05-15 20:29:59.228691] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.866 [2024-05-15 20:29:59.228705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.866 qpair failed and we were unable to recover it. 
00:38:06.866 [2024-05-15 20:29:59.238490] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.866 [2024-05-15 20:29:59.238573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.866 [2024-05-15 20:29:59.238590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.866 [2024-05-15 20:29:59.238598] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.866 [2024-05-15 20:29:59.238604] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.866 [2024-05-15 20:29:59.238619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-05-15 20:29:59.248532] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.866 [2024-05-15 20:29:59.248602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.866 [2024-05-15 20:29:59.248619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.866 [2024-05-15 20:29:59.248627] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.866 [2024-05-15 20:29:59.248634] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.866 [2024-05-15 20:29:59.248648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.866 [2024-05-15 20:29:59.258643] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.866 [2024-05-15 20:29:59.258716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.866 [2024-05-15 20:29:59.258732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.866 [2024-05-15 20:29:59.258740] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.866 [2024-05-15 20:29:59.258747] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.866 [2024-05-15 20:29:59.258762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.866 qpair failed and we were unable to recover it. 
00:38:06.866 [2024-05-15 20:29:59.268752] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.866 [2024-05-15 20:29:59.268868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.866 [2024-05-15 20:29:59.268885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.866 [2024-05-15 20:29:59.268893] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.866 [2024-05-15 20:29:59.268899] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.866 [2024-05-15 20:29:59.268914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.866 qpair failed and we were unable to recover it. 00:38:06.867 [2024-05-15 20:29:59.278705] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.867 [2024-05-15 20:29:59.278782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.867 [2024-05-15 20:29:59.278799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.867 [2024-05-15 20:29:59.278810] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.867 [2024-05-15 20:29:59.278818] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.867 [2024-05-15 20:29:59.278832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.867 [2024-05-15 20:29:59.288726] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.867 [2024-05-15 20:29:59.288796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.867 [2024-05-15 20:29:59.288813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.867 [2024-05-15 20:29:59.288822] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.867 [2024-05-15 20:29:59.288829] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.867 [2024-05-15 20:29:59.288844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.867 qpair failed and we were unable to recover it. 
00:38:06.867 [2024-05-15 20:29:59.298763] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.867 [2024-05-15 20:29:59.298834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.867 [2024-05-15 20:29:59.298851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.867 [2024-05-15 20:29:59.298859] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.867 [2024-05-15 20:29:59.298865] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.867 [2024-05-15 20:29:59.298881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.867 [2024-05-15 20:29:59.308782] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.867 [2024-05-15 20:29:59.308853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.867 [2024-05-15 20:29:59.308870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.867 [2024-05-15 20:29:59.308878] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.867 [2024-05-15 20:29:59.308884] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.867 [2024-05-15 20:29:59.308899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.867 [2024-05-15 20:29:59.318808] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.867 [2024-05-15 20:29:59.318886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.867 [2024-05-15 20:29:59.318903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.867 [2024-05-15 20:29:59.318910] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.867 [2024-05-15 20:29:59.318918] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.867 [2024-05-15 20:29:59.318932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.867 qpair failed and we were unable to recover it. 
00:38:06.867 [2024-05-15 20:29:59.328837] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.867 [2024-05-15 20:29:59.328910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.867 [2024-05-15 20:29:59.328927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.867 [2024-05-15 20:29:59.328935] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.867 [2024-05-15 20:29:59.328942] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.867 [2024-05-15 20:29:59.328956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.867 [2024-05-15 20:29:59.338858] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.867 [2024-05-15 20:29:59.338928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.867 [2024-05-15 20:29:59.338945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.867 [2024-05-15 20:29:59.338953] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.867 [2024-05-15 20:29:59.338959] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.867 [2024-05-15 20:29:59.338974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.867 qpair failed and we were unable to recover it. 00:38:06.867 [2024-05-15 20:29:59.348898] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.867 [2024-05-15 20:29:59.348970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.867 [2024-05-15 20:29:59.348987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.867 [2024-05-15 20:29:59.348994] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.867 [2024-05-15 20:29:59.349001] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.867 [2024-05-15 20:29:59.349015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.867 qpair failed and we were unable to recover it. 
00:38:06.867 [2024-05-15 20:29:59.358925] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:06.867 [2024-05-15 20:29:59.359005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:06.867 [2024-05-15 20:29:59.359031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:06.867 [2024-05-15 20:29:59.359041] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:06.867 [2024-05-15 20:29:59.359048] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:06.867 [2024-05-15 20:29:59.359067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:06.867 qpair failed and we were unable to recover it. 00:38:07.130 [2024-05-15 20:29:59.368992] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.130 [2024-05-15 20:29:59.369071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.130 [2024-05-15 20:29:59.369101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.130 [2024-05-15 20:29:59.369111] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.130 [2024-05-15 20:29:59.369118] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.130 [2024-05-15 20:29:59.369136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.130 qpair failed and we were unable to recover it. 00:38:07.130 [2024-05-15 20:29:59.378986] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.130 [2024-05-15 20:29:59.379057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.130 [2024-05-15 20:29:59.379075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.130 [2024-05-15 20:29:59.379082] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.130 [2024-05-15 20:29:59.379089] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.130 [2024-05-15 20:29:59.379105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.130 qpair failed and we were unable to recover it. 
00:38:07.130 [2024-05-15 20:29:59.388922] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.130 [2024-05-15 20:29:59.389001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.130 [2024-05-15 20:29:59.389018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.130 [2024-05-15 20:29:59.389026] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.131 [2024-05-15 20:29:59.389033] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.131 [2024-05-15 20:29:59.389049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.131 qpair failed and we were unable to recover it. 00:38:07.131 [2024-05-15 20:29:59.399045] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.131 [2024-05-15 20:29:59.399129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.131 [2024-05-15 20:29:59.399152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.131 [2024-05-15 20:29:59.399160] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.131 [2024-05-15 20:29:59.399167] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.131 [2024-05-15 20:29:59.399183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.131 qpair failed and we were unable to recover it. 00:38:07.131 [2024-05-15 20:29:59.409143] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.131 [2024-05-15 20:29:59.409214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.131 [2024-05-15 20:29:59.409231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.131 [2024-05-15 20:29:59.409239] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.131 [2024-05-15 20:29:59.409246] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.131 [2024-05-15 20:29:59.409261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.131 qpair failed and we were unable to recover it. 
00:38:07.131 [2024-05-15 20:29:59.419103] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.131 [2024-05-15 20:29:59.419175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.131 [2024-05-15 20:29:59.419192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.131 [2024-05-15 20:29:59.419199] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.131 [2024-05-15 20:29:59.419207] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.131 [2024-05-15 20:29:59.419221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.131 qpair failed and we were unable to recover it. 00:38:07.131 [2024-05-15 20:29:59.429172] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.131 [2024-05-15 20:29:59.429244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.131 [2024-05-15 20:29:59.429261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.131 [2024-05-15 20:29:59.429268] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.131 [2024-05-15 20:29:59.429274] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.131 [2024-05-15 20:29:59.429289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.131 qpair failed and we were unable to recover it. 00:38:07.131 [2024-05-15 20:29:59.439151] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.131 [2024-05-15 20:29:59.439228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.131 [2024-05-15 20:29:59.439245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.131 [2024-05-15 20:29:59.439252] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.131 [2024-05-15 20:29:59.439259] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.131 [2024-05-15 20:29:59.439274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.131 qpair failed and we were unable to recover it. 
00:38:07.131 [2024-05-15 20:29:59.449214] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.131 [2024-05-15 20:29:59.449284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.131 [2024-05-15 20:29:59.449304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.131 [2024-05-15 20:29:59.449312] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.131 [2024-05-15 20:29:59.449326] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.131 [2024-05-15 20:29:59.449343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.131 qpair failed and we were unable to recover it. 00:38:07.131 [2024-05-15 20:29:59.459210] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.131 [2024-05-15 20:29:59.459280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.131 [2024-05-15 20:29:59.459300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.131 [2024-05-15 20:29:59.459308] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.131 [2024-05-15 20:29:59.459319] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.131 [2024-05-15 20:29:59.459335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.131 qpair failed and we were unable to recover it. 00:38:07.131 [2024-05-15 20:29:59.469241] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.131 [2024-05-15 20:29:59.469319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.131 [2024-05-15 20:29:59.469336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.131 [2024-05-15 20:29:59.469343] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.131 [2024-05-15 20:29:59.469350] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.131 [2024-05-15 20:29:59.469365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.131 qpair failed and we were unable to recover it. 
00:38:07.131 [2024-05-15 20:29:59.479290] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.131 [2024-05-15 20:29:59.479372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.131 [2024-05-15 20:29:59.479389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.131 [2024-05-15 20:29:59.479398] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.131 [2024-05-15 20:29:59.479404] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.131 [2024-05-15 20:29:59.479419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.131 qpair failed and we were unable to recover it. 00:38:07.131 [2024-05-15 20:29:59.489305] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.131 [2024-05-15 20:29:59.489380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.131 [2024-05-15 20:29:59.489397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.131 [2024-05-15 20:29:59.489404] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.131 [2024-05-15 20:29:59.489411] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.131 [2024-05-15 20:29:59.489426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.131 qpair failed and we were unable to recover it. 00:38:07.131 [2024-05-15 20:29:59.499325] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.131 [2024-05-15 20:29:59.499398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.131 [2024-05-15 20:29:59.499415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.131 [2024-05-15 20:29:59.499422] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.131 [2024-05-15 20:29:59.499429] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.131 [2024-05-15 20:29:59.499447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.131 qpair failed and we were unable to recover it. 
00:38:07.131 [2024-05-15 20:29:59.509301] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.131 [2024-05-15 20:29:59.509398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.131 [2024-05-15 20:29:59.509415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.131 [2024-05-15 20:29:59.509423] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.131 [2024-05-15 20:29:59.509429] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.131 [2024-05-15 20:29:59.509444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.131 qpair failed and we were unable to recover it. 00:38:07.131 [2024-05-15 20:29:59.519280] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.131 [2024-05-15 20:29:59.519366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.131 [2024-05-15 20:29:59.519383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.131 [2024-05-15 20:29:59.519392] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.131 [2024-05-15 20:29:59.519398] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.131 [2024-05-15 20:29:59.519413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.131 qpair failed and we were unable to recover it. 00:38:07.131 [2024-05-15 20:29:59.529330] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.132 [2024-05-15 20:29:59.529411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.132 [2024-05-15 20:29:59.529428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.132 [2024-05-15 20:29:59.529436] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.132 [2024-05-15 20:29:59.529443] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.132 [2024-05-15 20:29:59.529458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.132 qpair failed and we were unable to recover it. 
00:38:07.132 [2024-05-15 20:29:59.539430] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.132 [2024-05-15 20:29:59.539502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.132 [2024-05-15 20:29:59.539519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.132 [2024-05-15 20:29:59.539526] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.132 [2024-05-15 20:29:59.539533] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.132 [2024-05-15 20:29:59.539549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.132 qpair failed and we were unable to recover it. 00:38:07.132 [2024-05-15 20:29:59.549464] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.132 [2024-05-15 20:29:59.549541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.132 [2024-05-15 20:29:59.549565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.132 [2024-05-15 20:29:59.549573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.132 [2024-05-15 20:29:59.549579] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.132 [2024-05-15 20:29:59.549594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.132 qpair failed and we were unable to recover it. 00:38:07.132 [2024-05-15 20:29:59.559521] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.132 [2024-05-15 20:29:59.559601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.132 [2024-05-15 20:29:59.559618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.132 [2024-05-15 20:29:59.559626] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.132 [2024-05-15 20:29:59.559632] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.132 [2024-05-15 20:29:59.559647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.132 qpair failed and we were unable to recover it. 
00:38:07.132 [2024-05-15 20:29:59.569546] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.132 [2024-05-15 20:29:59.569624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.132 [2024-05-15 20:29:59.569641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.132 [2024-05-15 20:29:59.569649] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.132 [2024-05-15 20:29:59.569655] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.132 [2024-05-15 20:29:59.569669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.132 qpair failed and we were unable to recover it. 00:38:07.132 [2024-05-15 20:29:59.579538] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.132 [2024-05-15 20:29:59.579606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.132 [2024-05-15 20:29:59.579623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.132 [2024-05-15 20:29:59.579631] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.132 [2024-05-15 20:29:59.579637] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.132 [2024-05-15 20:29:59.579652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.132 qpair failed and we were unable to recover it. 00:38:07.132 [2024-05-15 20:29:59.589611] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.132 [2024-05-15 20:29:59.589687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.132 [2024-05-15 20:29:59.589704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.132 [2024-05-15 20:29:59.589711] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.132 [2024-05-15 20:29:59.589718] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.132 [2024-05-15 20:29:59.589736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.132 qpair failed and we were unable to recover it. 
00:38:07.132 [2024-05-15 20:29:59.599643] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.132 [2024-05-15 20:29:59.599736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.132 [2024-05-15 20:29:59.599754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.132 [2024-05-15 20:29:59.599761] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.132 [2024-05-15 20:29:59.599768] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.132 [2024-05-15 20:29:59.599783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.132 qpair failed and we were unable to recover it. 00:38:07.132 [2024-05-15 20:29:59.609620] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.132 [2024-05-15 20:29:59.609696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.132 [2024-05-15 20:29:59.609716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.132 [2024-05-15 20:29:59.609724] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.132 [2024-05-15 20:29:59.609731] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.132 [2024-05-15 20:29:59.609747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.132 qpair failed and we were unable to recover it. 00:38:07.132 [2024-05-15 20:29:59.619673] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.132 [2024-05-15 20:29:59.619744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.132 [2024-05-15 20:29:59.619762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.132 [2024-05-15 20:29:59.619772] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.132 [2024-05-15 20:29:59.619779] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.132 [2024-05-15 20:29:59.619795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.132 qpair failed and we were unable to recover it. 
00:38:07.132 [2024-05-15 20:29:59.629699] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.132 [2024-05-15 20:29:59.629774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.132 [2024-05-15 20:29:59.629791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.132 [2024-05-15 20:29:59.629798] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.132 [2024-05-15 20:29:59.629804] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.132 [2024-05-15 20:29:59.629819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.132 qpair failed and we were unable to recover it. 00:38:07.396 [2024-05-15 20:29:59.639761] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.396 [2024-05-15 20:29:59.639837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.396 [2024-05-15 20:29:59.639858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.396 [2024-05-15 20:29:59.639866] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.396 [2024-05-15 20:29:59.639872] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.396 [2024-05-15 20:29:59.639887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.396 qpair failed and we were unable to recover it. 00:38:07.396 [2024-05-15 20:29:59.649765] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.396 [2024-05-15 20:29:59.649834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.396 [2024-05-15 20:29:59.649851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.396 [2024-05-15 20:29:59.649859] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.396 [2024-05-15 20:29:59.649865] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.396 [2024-05-15 20:29:59.649879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.396 qpair failed and we were unable to recover it. 
00:38:07.396 [2024-05-15 20:29:59.659804] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.396 [2024-05-15 20:29:59.659875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.396 [2024-05-15 20:29:59.659891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.396 [2024-05-15 20:29:59.659898] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.396 [2024-05-15 20:29:59.659904] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.396 [2024-05-15 20:29:59.659919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.396 qpair failed and we were unable to recover it. 00:38:07.396 [2024-05-15 20:29:59.669829] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.396 [2024-05-15 20:29:59.669902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.396 [2024-05-15 20:29:59.669918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.396 [2024-05-15 20:29:59.669926] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.396 [2024-05-15 20:29:59.669932] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.396 [2024-05-15 20:29:59.669947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.396 qpair failed and we were unable to recover it. 00:38:07.396 [2024-05-15 20:29:59.679845] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.396 [2024-05-15 20:29:59.679956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.396 [2024-05-15 20:29:59.679974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.396 [2024-05-15 20:29:59.679981] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.396 [2024-05-15 20:29:59.679987] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.396 [2024-05-15 20:29:59.680009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.396 qpair failed and we were unable to recover it. 
00:38:07.396 [2024-05-15 20:29:59.689849] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.396 [2024-05-15 20:29:59.689931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.396 [2024-05-15 20:29:59.689957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.396 [2024-05-15 20:29:59.689967] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.396 [2024-05-15 20:29:59.689974] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.396 [2024-05-15 20:29:59.689992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.396 qpair failed and we were unable to recover it. 00:38:07.396 [2024-05-15 20:29:59.699929] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.396 [2024-05-15 20:29:59.700019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.396 [2024-05-15 20:29:59.700045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.396 [2024-05-15 20:29:59.700054] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.396 [2024-05-15 20:29:59.700061] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.396 [2024-05-15 20:29:59.700079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.396 qpair failed and we were unable to recover it. 00:38:07.396 [2024-05-15 20:29:59.709946] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.396 [2024-05-15 20:29:59.710025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.396 [2024-05-15 20:29:59.710051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.396 [2024-05-15 20:29:59.710061] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.396 [2024-05-15 20:29:59.710068] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.396 [2024-05-15 20:29:59.710086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.396 qpair failed and we were unable to recover it. 
00:38:07.396 [2024-05-15 20:29:59.719842] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.396 [2024-05-15 20:29:59.719926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.396 [2024-05-15 20:29:59.719944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.396 [2024-05-15 20:29:59.719952] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.396 [2024-05-15 20:29:59.719959] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.396 [2024-05-15 20:29:59.719975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.396 qpair failed and we were unable to recover it. 00:38:07.396 [2024-05-15 20:29:59.729995] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.396 [2024-05-15 20:29:59.730076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.396 [2024-05-15 20:29:59.730097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.396 [2024-05-15 20:29:59.730105] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.396 [2024-05-15 20:29:59.730112] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.396 [2024-05-15 20:29:59.730127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.396 qpair failed and we were unable to recover it. 00:38:07.396 [2024-05-15 20:29:59.739991] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.397 [2024-05-15 20:29:59.740071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.397 [2024-05-15 20:29:59.740088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.397 [2024-05-15 20:29:59.740096] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.397 [2024-05-15 20:29:59.740103] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.397 [2024-05-15 20:29:59.740118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.397 qpair failed and we were unable to recover it. 
00:38:07.397 [2024-05-15 20:29:59.750033] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.397 [2024-05-15 20:29:59.750106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.397 [2024-05-15 20:29:59.750123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.397 [2024-05-15 20:29:59.750130] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.397 [2024-05-15 20:29:59.750136] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.397 [2024-05-15 20:29:59.750152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.397 qpair failed and we were unable to recover it. 00:38:07.397 [2024-05-15 20:29:59.760063] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.397 [2024-05-15 20:29:59.760151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.397 [2024-05-15 20:29:59.760168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.397 [2024-05-15 20:29:59.760176] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.397 [2024-05-15 20:29:59.760182] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.397 [2024-05-15 20:29:59.760197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.397 qpair failed and we were unable to recover it. 00:38:07.397 [2024-05-15 20:29:59.770102] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.397 [2024-05-15 20:29:59.770183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.397 [2024-05-15 20:29:59.770200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.397 [2024-05-15 20:29:59.770207] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.397 [2024-05-15 20:29:59.770217] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.397 [2024-05-15 20:29:59.770232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.397 qpair failed and we were unable to recover it. 
00:38:07.397 [2024-05-15 20:29:59.780091] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.397 [2024-05-15 20:29:59.780160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.397 [2024-05-15 20:29:59.780177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.397 [2024-05-15 20:29:59.780184] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.397 [2024-05-15 20:29:59.780191] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.397 [2024-05-15 20:29:59.780206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.397 qpair failed and we were unable to recover it. 00:38:07.397 [2024-05-15 20:29:59.790149] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.397 [2024-05-15 20:29:59.790224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.397 [2024-05-15 20:29:59.790241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.397 [2024-05-15 20:29:59.790248] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.397 [2024-05-15 20:29:59.790255] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.397 [2024-05-15 20:29:59.790270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.397 qpair failed and we were unable to recover it. 00:38:07.397 [2024-05-15 20:29:59.800077] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.397 [2024-05-15 20:29:59.800152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.397 [2024-05-15 20:29:59.800168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.397 [2024-05-15 20:29:59.800176] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.397 [2024-05-15 20:29:59.800182] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.397 [2024-05-15 20:29:59.800196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.397 qpair failed and we were unable to recover it. 
00:38:07.397 [2024-05-15 20:29:59.810147] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.397 [2024-05-15 20:29:59.810239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.397 [2024-05-15 20:29:59.810256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.397 [2024-05-15 20:29:59.810263] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.397 [2024-05-15 20:29:59.810269] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.397 [2024-05-15 20:29:59.810284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.397 qpair failed and we were unable to recover it. 00:38:07.397 [2024-05-15 20:29:59.820225] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.397 [2024-05-15 20:29:59.820304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.397 [2024-05-15 20:29:59.820326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.397 [2024-05-15 20:29:59.820333] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.397 [2024-05-15 20:29:59.820340] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.397 [2024-05-15 20:29:59.820355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.397 qpair failed and we were unable to recover it. 00:38:07.397 [2024-05-15 20:29:59.830256] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.397 [2024-05-15 20:29:59.830333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.397 [2024-05-15 20:29:59.830349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.397 [2024-05-15 20:29:59.830357] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.397 [2024-05-15 20:29:59.830364] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.397 [2024-05-15 20:29:59.830379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.397 qpair failed and we were unable to recover it. 
00:38:07.397 [2024-05-15 20:29:59.840291] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.397 [2024-05-15 20:29:59.840377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.397 [2024-05-15 20:29:59.840393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.397 [2024-05-15 20:29:59.840401] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.397 [2024-05-15 20:29:59.840408] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.397 [2024-05-15 20:29:59.840423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.397 qpair failed and we were unable to recover it. 00:38:07.397 [2024-05-15 20:29:59.850249] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.397 [2024-05-15 20:29:59.850327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.397 [2024-05-15 20:29:59.850344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.397 [2024-05-15 20:29:59.850351] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.397 [2024-05-15 20:29:59.850358] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.397 [2024-05-15 20:29:59.850373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.397 qpair failed and we were unable to recover it. 00:38:07.397 [2024-05-15 20:29:59.860341] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.397 [2024-05-15 20:29:59.860411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.397 [2024-05-15 20:29:59.860427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.397 [2024-05-15 20:29:59.860435] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.397 [2024-05-15 20:29:59.860445] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.397 [2024-05-15 20:29:59.860460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.397 qpair failed and we were unable to recover it. 
00:38:07.397 [2024-05-15 20:29:59.870379] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.397 [2024-05-15 20:29:59.870452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.397 [2024-05-15 20:29:59.870468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.397 [2024-05-15 20:29:59.870476] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.397 [2024-05-15 20:29:59.870482] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.398 [2024-05-15 20:29:59.870497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.398 qpair failed and we were unable to recover it. 00:38:07.398 [2024-05-15 20:29:59.880438] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.398 [2024-05-15 20:29:59.880523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.398 [2024-05-15 20:29:59.880540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.398 [2024-05-15 20:29:59.880548] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.398 [2024-05-15 20:29:59.880555] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.398 [2024-05-15 20:29:59.880570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.398 qpair failed and we were unable to recover it. 00:38:07.398 [2024-05-15 20:29:59.890413] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.398 [2024-05-15 20:29:59.890487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.398 [2024-05-15 20:29:59.890504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.398 [2024-05-15 20:29:59.890511] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.398 [2024-05-15 20:29:59.890517] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.398 [2024-05-15 20:29:59.890532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.398 qpair failed and we were unable to recover it. 
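Note: each repeated block above is the host-side SPDK initiator retrying to add an I/O queue pair to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 while the target no longer recognizes controller ID 0x1, so every Fabrics CONNECT completes with sct 1, sc 130 (0x82, the NVMe-oF "Connect Invalid Parameters" status) and the TCP qpair is torn down with CQ transport error -6 (-ENXIO). The sketch below is a minimal, hypothetical reproduction against the public SPDK host API (spdk_nvme_connect, spdk_nvme_ctrlr_alloc_io_qpair, spdk_nvme_qpair_process_completions); it is not the test application that produced this log, and the program name is invented.

/*
 * Minimal host-side sketch (hypothetical, not the autotest code) showing where
 * the failure pattern above surfaces through the public SPDK host API.
 * A Fabrics CONNECT rejected with sct 1 / sc 0x82 leaves the I/O qpair unusable,
 * and spdk_nvme_qpair_process_completions() returns a negative errno, which the
 * driver prints as "CQ transport error -6 (No such device or address)".
 */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	struct spdk_nvme_qpair *qpair;
	int32_t rc;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "connect_probe";                 /* invented app name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same target as in the log: TCP, 10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1 */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	ctrlr = spdk_nvme_connect(&trid, NULL, 0);            /* admin qpair CONNECT */
	if (ctrlr == NULL) {
		return 1;
	}

	qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0); /* I/O qpair CONNECT */
	if (qpair == NULL) {
		/* CONNECT rejected up front, e.g. unknown controller ID on the target. */
		spdk_nvme_detach(ctrlr);
		return 1;
	}

	rc = spdk_nvme_qpair_process_completions(qpair, 0);
	printf("process_completions rc = %d\n", (int)rc);     /* rc < 0 => qpair failed */

	spdk_nvme_ctrlr_free_io_qpair(qpair);
	spdk_nvme_detach(ctrlr);
	spdk_env_fini();
	return 0;
}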
00:38:07.660 [2024-05-15 20:29:59.900345] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.660 [2024-05-15 20:29:59.900424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.660 [2024-05-15 20:29:59.900440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.660 [2024-05-15 20:29:59.900448] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.660 [2024-05-15 20:29:59.900454] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.660 [2024-05-15 20:29:59.900468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.660 qpair failed and we were unable to recover it. 00:38:07.660 [2024-05-15 20:29:59.910514] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.660 [2024-05-15 20:29:59.910596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.660 [2024-05-15 20:29:59.910612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.660 [2024-05-15 20:29:59.910619] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.660 [2024-05-15 20:29:59.910626] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.660 [2024-05-15 20:29:59.910640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.660 qpair failed and we were unable to recover it. 00:38:07.660 [2024-05-15 20:29:59.920488] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.660 [2024-05-15 20:29:59.920568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.660 [2024-05-15 20:29:59.920584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.660 [2024-05-15 20:29:59.920592] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.660 [2024-05-15 20:29:59.920599] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.660 [2024-05-15 20:29:59.920613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.660 qpair failed and we were unable to recover it. 
00:38:07.660 [2024-05-15 20:29:59.930534] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.660 [2024-05-15 20:29:59.930607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.660 [2024-05-15 20:29:59.930623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.660 [2024-05-15 20:29:59.930630] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.660 [2024-05-15 20:29:59.930637] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.660 [2024-05-15 20:29:59.930652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.660 qpair failed and we were unable to recover it. 00:38:07.660 [2024-05-15 20:29:59.940569] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.660 [2024-05-15 20:29:59.940641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.660 [2024-05-15 20:29:59.940657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.660 [2024-05-15 20:29:59.940664] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.660 [2024-05-15 20:29:59.940671] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.660 [2024-05-15 20:29:59.940686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.660 qpair failed and we were unable to recover it. 00:38:07.660 [2024-05-15 20:29:59.950594] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.661 [2024-05-15 20:29:59.950698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.661 [2024-05-15 20:29:59.950715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.661 [2024-05-15 20:29:59.950722] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.661 [2024-05-15 20:29:59.950732] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.661 [2024-05-15 20:29:59.950747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.661 qpair failed and we were unable to recover it. 
00:38:07.661 [2024-05-15 20:29:59.960637] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.661 [2024-05-15 20:29:59.960714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.661 [2024-05-15 20:29:59.960730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.661 [2024-05-15 20:29:59.960737] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.661 [2024-05-15 20:29:59.960745] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.661 [2024-05-15 20:29:59.960759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.661 qpair failed and we were unable to recover it. 00:38:07.661 [2024-05-15 20:29:59.970542] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.661 [2024-05-15 20:29:59.970622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.661 [2024-05-15 20:29:59.970639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.661 [2024-05-15 20:29:59.970647] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.661 [2024-05-15 20:29:59.970653] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.661 [2024-05-15 20:29:59.970668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.661 qpair failed and we were unable to recover it. 00:38:07.661 [2024-05-15 20:29:59.980688] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.661 [2024-05-15 20:29:59.980756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.661 [2024-05-15 20:29:59.980772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.661 [2024-05-15 20:29:59.980780] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.661 [2024-05-15 20:29:59.980786] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.661 [2024-05-15 20:29:59.980802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.661 qpair failed and we were unable to recover it. 
00:38:07.661 [2024-05-15 20:29:59.990683] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.661 [2024-05-15 20:29:59.990755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.661 [2024-05-15 20:29:59.990771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.661 [2024-05-15 20:29:59.990779] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.661 [2024-05-15 20:29:59.990785] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.661 [2024-05-15 20:29:59.990800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.661 qpair failed and we were unable to recover it. 00:38:07.661 [2024-05-15 20:30:00.000739] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.661 [2024-05-15 20:30:00.000814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.661 [2024-05-15 20:30:00.000831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.661 [2024-05-15 20:30:00.000839] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.661 [2024-05-15 20:30:00.000845] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.661 [2024-05-15 20:30:00.000860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.661 qpair failed and we were unable to recover it. 00:38:07.661 [2024-05-15 20:30:00.010779] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.661 [2024-05-15 20:30:00.010895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.661 [2024-05-15 20:30:00.010912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.661 [2024-05-15 20:30:00.010921] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.661 [2024-05-15 20:30:00.010928] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.661 [2024-05-15 20:30:00.010943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.661 qpair failed and we were unable to recover it. 
00:38:07.661 [2024-05-15 20:30:00.020811] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.661 [2024-05-15 20:30:00.020881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.661 [2024-05-15 20:30:00.020898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.661 [2024-05-15 20:30:00.020906] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.661 [2024-05-15 20:30:00.020912] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.661 [2024-05-15 20:30:00.020927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.661 qpair failed and we were unable to recover it. 00:38:07.661 [2024-05-15 20:30:00.030825] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.661 [2024-05-15 20:30:00.030899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.661 [2024-05-15 20:30:00.030916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.661 [2024-05-15 20:30:00.030923] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.661 [2024-05-15 20:30:00.030929] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.661 [2024-05-15 20:30:00.030945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.661 qpair failed and we were unable to recover it. 00:38:07.661 [2024-05-15 20:30:00.040854] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.661 [2024-05-15 20:30:00.040934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.661 [2024-05-15 20:30:00.040951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.661 [2024-05-15 20:30:00.040964] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.661 [2024-05-15 20:30:00.040971] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.661 [2024-05-15 20:30:00.040985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.661 qpair failed and we were unable to recover it. 
00:38:07.661 [2024-05-15 20:30:00.050883] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.661 [2024-05-15 20:30:00.050962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.661 [2024-05-15 20:30:00.050979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.661 [2024-05-15 20:30:00.050987] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.661 [2024-05-15 20:30:00.050994] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.661 [2024-05-15 20:30:00.051008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.661 qpair failed and we were unable to recover it. 00:38:07.661 [2024-05-15 20:30:00.060904] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.661 [2024-05-15 20:30:00.060982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.661 [2024-05-15 20:30:00.061007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.661 [2024-05-15 20:30:00.061017] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.661 [2024-05-15 20:30:00.061024] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.661 [2024-05-15 20:30:00.061043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.661 qpair failed and we were unable to recover it. 00:38:07.661 [2024-05-15 20:30:00.070947] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.661 [2024-05-15 20:30:00.071027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.661 [2024-05-15 20:30:00.071054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.661 [2024-05-15 20:30:00.071063] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.661 [2024-05-15 20:30:00.071070] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.661 [2024-05-15 20:30:00.071089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.661 qpair failed and we were unable to recover it. 
00:38:07.661 [2024-05-15 20:30:00.080961] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.661 [2024-05-15 20:30:00.081045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.661 [2024-05-15 20:30:00.081070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.661 [2024-05-15 20:30:00.081080] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.661 [2024-05-15 20:30:00.081086] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.661 [2024-05-15 20:30:00.081105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.661 qpair failed and we were unable to recover it. 00:38:07.662 [2024-05-15 20:30:00.090975] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.662 [2024-05-15 20:30:00.091067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.662 [2024-05-15 20:30:00.091093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.662 [2024-05-15 20:30:00.091101] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.662 [2024-05-15 20:30:00.091108] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.662 [2024-05-15 20:30:00.091127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.662 qpair failed and we were unable to recover it. 00:38:07.662 [2024-05-15 20:30:00.101009] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.662 [2024-05-15 20:30:00.101094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.662 [2024-05-15 20:30:00.101120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.662 [2024-05-15 20:30:00.101129] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.662 [2024-05-15 20:30:00.101135] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.662 [2024-05-15 20:30:00.101155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.662 qpair failed and we were unable to recover it. 
00:38:07.662 [2024-05-15 20:30:00.111053] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.662 [2024-05-15 20:30:00.111165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.662 [2024-05-15 20:30:00.111183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.662 [2024-05-15 20:30:00.111191] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.662 [2024-05-15 20:30:00.111197] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.662 [2024-05-15 20:30:00.111212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.662 qpair failed and we were unable to recover it. 00:38:07.662 [2024-05-15 20:30:00.121092] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.662 [2024-05-15 20:30:00.121175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.662 [2024-05-15 20:30:00.121192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.662 [2024-05-15 20:30:00.121200] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.662 [2024-05-15 20:30:00.121208] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.662 [2024-05-15 20:30:00.121223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.662 qpair failed and we were unable to recover it. 00:38:07.662 [2024-05-15 20:30:00.131123] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.662 [2024-05-15 20:30:00.131196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.662 [2024-05-15 20:30:00.131213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.662 [2024-05-15 20:30:00.131225] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.662 [2024-05-15 20:30:00.131232] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.662 [2024-05-15 20:30:00.131248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.662 qpair failed and we were unable to recover it. 
00:38:07.662 [2024-05-15 20:30:00.141209] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.662 [2024-05-15 20:30:00.141286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.662 [2024-05-15 20:30:00.141303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.662 [2024-05-15 20:30:00.141310] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.662 [2024-05-15 20:30:00.141323] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.662 [2024-05-15 20:30:00.141339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.662 qpair failed and we were unable to recover it. 00:38:07.662 [2024-05-15 20:30:00.151165] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.662 [2024-05-15 20:30:00.151276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.662 [2024-05-15 20:30:00.151293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.662 [2024-05-15 20:30:00.151301] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.662 [2024-05-15 20:30:00.151307] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.662 [2024-05-15 20:30:00.151327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.662 qpair failed and we were unable to recover it. 00:38:07.924 [2024-05-15 20:30:00.161252] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.924 [2024-05-15 20:30:00.161338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.924 [2024-05-15 20:30:00.161355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.924 [2024-05-15 20:30:00.161362] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.924 [2024-05-15 20:30:00.161368] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.924 [2024-05-15 20:30:00.161384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.924 qpair failed and we were unable to recover it. 
00:38:07.924 [2024-05-15 20:30:00.171209] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.924 [2024-05-15 20:30:00.171286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.924 [2024-05-15 20:30:00.171302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.924 [2024-05-15 20:30:00.171310] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.924 [2024-05-15 20:30:00.171322] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.924 [2024-05-15 20:30:00.171337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.924 qpair failed and we were unable to recover it. 00:38:07.924 [2024-05-15 20:30:00.181255] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.924 [2024-05-15 20:30:00.181339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.924 [2024-05-15 20:30:00.181356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.924 [2024-05-15 20:30:00.181364] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.924 [2024-05-15 20:30:00.181370] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.924 [2024-05-15 20:30:00.181387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.924 qpair failed and we were unable to recover it. 00:38:07.924 [2024-05-15 20:30:00.191358] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.924 [2024-05-15 20:30:00.191470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.924 [2024-05-15 20:30:00.191486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.924 [2024-05-15 20:30:00.191493] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.924 [2024-05-15 20:30:00.191501] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.924 [2024-05-15 20:30:00.191516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.924 qpair failed and we were unable to recover it. 
00:38:07.924 [2024-05-15 20:30:00.201203] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.924 [2024-05-15 20:30:00.201282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.924 [2024-05-15 20:30:00.201299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.924 [2024-05-15 20:30:00.201306] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.924 [2024-05-15 20:30:00.201319] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.924 [2024-05-15 20:30:00.201334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.924 qpair failed and we were unable to recover it. 00:38:07.924 [2024-05-15 20:30:00.211416] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.924 [2024-05-15 20:30:00.211535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.924 [2024-05-15 20:30:00.211551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.924 [2024-05-15 20:30:00.211559] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.924 [2024-05-15 20:30:00.211566] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.924 [2024-05-15 20:30:00.211581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.924 qpair failed and we were unable to recover it. 00:38:07.924 [2024-05-15 20:30:00.221278] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.924 [2024-05-15 20:30:00.221354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.924 [2024-05-15 20:30:00.221371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.924 [2024-05-15 20:30:00.221383] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.924 [2024-05-15 20:30:00.221389] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.924 [2024-05-15 20:30:00.221404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.924 qpair failed and we were unable to recover it. 
00:38:07.924 [2024-05-15 20:30:00.231402] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.924 [2024-05-15 20:30:00.231504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.924 [2024-05-15 20:30:00.231521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.924 [2024-05-15 20:30:00.231529] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.924 [2024-05-15 20:30:00.231535] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.924 [2024-05-15 20:30:00.231550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.924 qpair failed and we were unable to recover it. 00:38:07.924 [2024-05-15 20:30:00.241432] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.924 [2024-05-15 20:30:00.241511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.924 [2024-05-15 20:30:00.241528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.924 [2024-05-15 20:30:00.241536] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.924 [2024-05-15 20:30:00.241542] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.924 [2024-05-15 20:30:00.241557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.924 qpair failed and we were unable to recover it. 00:38:07.924 [2024-05-15 20:30:00.251445] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.924 [2024-05-15 20:30:00.251517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.924 [2024-05-15 20:30:00.251534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.925 [2024-05-15 20:30:00.251541] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.925 [2024-05-15 20:30:00.251547] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.925 [2024-05-15 20:30:00.251562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.925 qpair failed and we were unable to recover it. 
00:38:07.925 [2024-05-15 20:30:00.261502] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.925 [2024-05-15 20:30:00.261573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.925 [2024-05-15 20:30:00.261590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.925 [2024-05-15 20:30:00.261597] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.925 [2024-05-15 20:30:00.261603] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.925 [2024-05-15 20:30:00.261619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.925 qpair failed and we were unable to recover it. 00:38:07.925 [2024-05-15 20:30:00.271492] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.925 [2024-05-15 20:30:00.271569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.925 [2024-05-15 20:30:00.271586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.925 [2024-05-15 20:30:00.271593] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.925 [2024-05-15 20:30:00.271600] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.925 [2024-05-15 20:30:00.271615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.925 qpair failed and we were unable to recover it. 00:38:07.925 [2024-05-15 20:30:00.281507] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.925 [2024-05-15 20:30:00.281597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.925 [2024-05-15 20:30:00.281613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.925 [2024-05-15 20:30:00.281621] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.925 [2024-05-15 20:30:00.281628] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.925 [2024-05-15 20:30:00.281642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.925 qpair failed and we were unable to recover it. 
00:38:07.925 [2024-05-15 20:30:00.291570] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.925 [2024-05-15 20:30:00.291651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.925 [2024-05-15 20:30:00.291668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.925 [2024-05-15 20:30:00.291675] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.925 [2024-05-15 20:30:00.291683] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.925 [2024-05-15 20:30:00.291697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.925 qpair failed and we were unable to recover it. 00:38:07.925 [2024-05-15 20:30:00.301648] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.925 [2024-05-15 20:30:00.301716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.925 [2024-05-15 20:30:00.301732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.925 [2024-05-15 20:30:00.301740] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.925 [2024-05-15 20:30:00.301747] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.925 [2024-05-15 20:30:00.301762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.925 qpair failed and we were unable to recover it. 00:38:07.925 [2024-05-15 20:30:00.311632] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.925 [2024-05-15 20:30:00.311707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.925 [2024-05-15 20:30:00.311726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.925 [2024-05-15 20:30:00.311734] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.925 [2024-05-15 20:30:00.311742] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.925 [2024-05-15 20:30:00.311757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.925 qpair failed and we were unable to recover it. 
00:38:07.925 [2024-05-15 20:30:00.321636] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.925 [2024-05-15 20:30:00.321721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.925 [2024-05-15 20:30:00.321737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.925 [2024-05-15 20:30:00.321746] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.925 [2024-05-15 20:30:00.321753] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.925 [2024-05-15 20:30:00.321767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.925 qpair failed and we were unable to recover it. 00:38:07.925 [2024-05-15 20:30:00.331732] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.925 [2024-05-15 20:30:00.331847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.925 [2024-05-15 20:30:00.331864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.925 [2024-05-15 20:30:00.331871] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.925 [2024-05-15 20:30:00.331878] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.925 [2024-05-15 20:30:00.331893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.925 qpair failed and we were unable to recover it. 00:38:07.925 [2024-05-15 20:30:00.341698] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.925 [2024-05-15 20:30:00.341770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.925 [2024-05-15 20:30:00.341787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.925 [2024-05-15 20:30:00.341794] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.925 [2024-05-15 20:30:00.341801] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.925 [2024-05-15 20:30:00.341816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.925 qpair failed and we were unable to recover it. 
00:38:07.925 [2024-05-15 20:30:00.351802] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.925 [2024-05-15 20:30:00.351890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.925 [2024-05-15 20:30:00.351907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.925 [2024-05-15 20:30:00.351915] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.925 [2024-05-15 20:30:00.351921] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.925 [2024-05-15 20:30:00.351937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.925 qpair failed and we were unable to recover it. 00:38:07.925 [2024-05-15 20:30:00.361808] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.925 [2024-05-15 20:30:00.361885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.925 [2024-05-15 20:30:00.361902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.925 [2024-05-15 20:30:00.361909] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.925 [2024-05-15 20:30:00.361916] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.925 [2024-05-15 20:30:00.361932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.925 qpair failed and we were unable to recover it. 00:38:07.925 [2024-05-15 20:30:00.371823] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.925 [2024-05-15 20:30:00.371899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.925 [2024-05-15 20:30:00.371916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.925 [2024-05-15 20:30:00.371924] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.925 [2024-05-15 20:30:00.371931] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.925 [2024-05-15 20:30:00.371945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.925 qpair failed and we were unable to recover it. 
00:38:07.925 [2024-05-15 20:30:00.381830] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.925 [2024-05-15 20:30:00.381910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.925 [2024-05-15 20:30:00.381926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.925 [2024-05-15 20:30:00.381934] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.926 [2024-05-15 20:30:00.381941] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.926 [2024-05-15 20:30:00.381955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.926 qpair failed and we were unable to recover it. 00:38:07.926 [2024-05-15 20:30:00.391861] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.926 [2024-05-15 20:30:00.391937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.926 [2024-05-15 20:30:00.391954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.926 [2024-05-15 20:30:00.391961] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.926 [2024-05-15 20:30:00.391968] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.926 [2024-05-15 20:30:00.391983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.926 qpair failed and we were unable to recover it. 00:38:07.926 [2024-05-15 20:30:00.401883] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.926 [2024-05-15 20:30:00.401958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.926 [2024-05-15 20:30:00.401979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.926 [2024-05-15 20:30:00.401987] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.926 [2024-05-15 20:30:00.401994] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.926 [2024-05-15 20:30:00.402008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.926 qpair failed and we were unable to recover it. 
00:38:07.926 [2024-05-15 20:30:00.411933] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.926 [2024-05-15 20:30:00.412004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.926 [2024-05-15 20:30:00.412021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.926 [2024-05-15 20:30:00.412028] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.926 [2024-05-15 20:30:00.412036] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.926 [2024-05-15 20:30:00.412051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.926 qpair failed and we were unable to recover it. 00:38:07.926 [2024-05-15 20:30:00.421945] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:07.926 [2024-05-15 20:30:00.422024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:07.926 [2024-05-15 20:30:00.422049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:07.926 [2024-05-15 20:30:00.422058] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:07.926 [2024-05-15 20:30:00.422065] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:07.926 [2024-05-15 20:30:00.422084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:07.926 qpair failed and we were unable to recover it. 00:38:08.188 [2024-05-15 20:30:00.431968] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.188 [2024-05-15 20:30:00.432054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.188 [2024-05-15 20:30:00.432079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.188 [2024-05-15 20:30:00.432089] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.188 [2024-05-15 20:30:00.432097] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:08.188 [2024-05-15 20:30:00.432115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:08.188 qpair failed and we were unable to recover it. 
00:38:08.188 [2024-05-15 20:30:00.441993] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.188 [2024-05-15 20:30:00.442086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.188 [2024-05-15 20:30:00.442112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.188 [2024-05-15 20:30:00.442121] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.188 [2024-05-15 20:30:00.442128] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:08.188 [2024-05-15 20:30:00.442156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:08.188 qpair failed and we were unable to recover it. 00:38:08.188 [2024-05-15 20:30:00.451929] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.188 [2024-05-15 20:30:00.452010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.188 [2024-05-15 20:30:00.452036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.188 [2024-05-15 20:30:00.452045] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.188 [2024-05-15 20:30:00.452052] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:08.188 [2024-05-15 20:30:00.452070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:08.188 qpair failed and we were unable to recover it. 00:38:08.188 [2024-05-15 20:30:00.461962] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.188 [2024-05-15 20:30:00.462039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.188 [2024-05-15 20:30:00.462057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.188 [2024-05-15 20:30:00.462066] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.188 [2024-05-15 20:30:00.462072] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:08.188 [2024-05-15 20:30:00.462089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:08.188 qpair failed and we were unable to recover it. 
00:38:08.188 [2024-05-15 20:30:00.472090] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.188 [2024-05-15 20:30:00.472170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.188 [2024-05-15 20:30:00.472196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.188 [2024-05-15 20:30:00.472205] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.188 [2024-05-15 20:30:00.472212] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:08.188 [2024-05-15 20:30:00.472231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:08.188 qpair failed and we were unable to recover it. 00:38:08.188 [2024-05-15 20:30:00.482106] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.189 [2024-05-15 20:30:00.482189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.189 [2024-05-15 20:30:00.482207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.189 [2024-05-15 20:30:00.482215] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.189 [2024-05-15 20:30:00.482222] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:08.189 [2024-05-15 20:30:00.482237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:08.189 qpair failed and we were unable to recover it. 00:38:08.189 [2024-05-15 20:30:00.492141] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.189 [2024-05-15 20:30:00.492215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.189 [2024-05-15 20:30:00.492237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.189 [2024-05-15 20:30:00.492246] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.189 [2024-05-15 20:30:00.492252] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:08.189 [2024-05-15 20:30:00.492267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:08.189 qpair failed and we were unable to recover it. 
00:38:08.189 [2024-05-15 20:30:00.502234] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.189 [2024-05-15 20:30:00.502375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.189 [2024-05-15 20:30:00.502393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.189 [2024-05-15 20:30:00.502400] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.189 [2024-05-15 20:30:00.502407] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:08.189 [2024-05-15 20:30:00.502422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:08.189 qpair failed and we were unable to recover it. 00:38:08.189 [2024-05-15 20:30:00.512245] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.189 [2024-05-15 20:30:00.512320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.189 [2024-05-15 20:30:00.512338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.189 [2024-05-15 20:30:00.512345] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.189 [2024-05-15 20:30:00.512352] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:08.189 [2024-05-15 20:30:00.512367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:08.189 qpair failed and we were unable to recover it. 00:38:08.189 [2024-05-15 20:30:00.522230] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.189 [2024-05-15 20:30:00.522310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.189 [2024-05-15 20:30:00.522336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.189 [2024-05-15 20:30:00.522344] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.189 [2024-05-15 20:30:00.522350] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:08.189 [2024-05-15 20:30:00.522366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:08.189 qpair failed and we were unable to recover it. 
00:38:08.189 [2024-05-15 20:30:00.532275] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.189 [2024-05-15 20:30:00.532352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.189 [2024-05-15 20:30:00.532370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.189 [2024-05-15 20:30:00.532379] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.189 [2024-05-15 20:30:00.532386] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:08.189 [2024-05-15 20:30:00.532406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:08.189 qpair failed and we were unable to recover it. 00:38:08.189 [2024-05-15 20:30:00.542310] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.189 [2024-05-15 20:30:00.542423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.189 [2024-05-15 20:30:00.542440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.189 [2024-05-15 20:30:00.542448] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.189 [2024-05-15 20:30:00.542454] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:08.189 [2024-05-15 20:30:00.542469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:08.189 qpair failed and we were unable to recover it. 00:38:08.189 [2024-05-15 20:30:00.552383] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.189 [2024-05-15 20:30:00.552500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.189 [2024-05-15 20:30:00.552517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.189 [2024-05-15 20:30:00.552526] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.189 [2024-05-15 20:30:00.552532] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:08.189 [2024-05-15 20:30:00.552549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:08.189 qpair failed and we were unable to recover it. 
00:38:08.189 [2024-05-15 20:30:00.562352] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.189 [2024-05-15 20:30:00.562430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.189 [2024-05-15 20:30:00.562446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.189 [2024-05-15 20:30:00.562454] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.189 [2024-05-15 20:30:00.562460] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:08.189 [2024-05-15 20:30:00.562476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:08.189 qpair failed and we were unable to recover it. 00:38:08.189 [2024-05-15 20:30:00.572392] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.189 [2024-05-15 20:30:00.572464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.189 [2024-05-15 20:30:00.572481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.189 [2024-05-15 20:30:00.572489] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.189 [2024-05-15 20:30:00.572495] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:08.189 [2024-05-15 20:30:00.572510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:08.189 qpair failed and we were unable to recover it. 00:38:08.189 [2024-05-15 20:30:00.582451] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.189 [2024-05-15 20:30:00.582523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.189 [2024-05-15 20:30:00.582543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.189 [2024-05-15 20:30:00.582551] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.189 [2024-05-15 20:30:00.582557] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:08.189 [2024-05-15 20:30:00.582572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:08.189 qpair failed and we were unable to recover it. 
00:38:08.189 [2024-05-15 20:30:00.592351] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.189 [2024-05-15 20:30:00.592426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.189 [2024-05-15 20:30:00.592443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.189 [2024-05-15 20:30:00.592451] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.189 [2024-05-15 20:30:00.592457] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:08.189 [2024-05-15 20:30:00.592471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:08.189 qpair failed and we were unable to recover it. 00:38:08.189 [2024-05-15 20:30:00.602394] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.189 [2024-05-15 20:30:00.602473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.189 [2024-05-15 20:30:00.602490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.189 [2024-05-15 20:30:00.602497] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.189 [2024-05-15 20:30:00.602504] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:08.189 [2024-05-15 20:30:00.602518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:08.189 qpair failed and we were unable to recover it. 00:38:08.189 [2024-05-15 20:30:00.612541] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.189 [2024-05-15 20:30:00.612621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.189 [2024-05-15 20:30:00.612639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.189 [2024-05-15 20:30:00.612647] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.189 [2024-05-15 20:30:00.612653] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:08.189 [2024-05-15 20:30:00.612669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:08.189 qpair failed and we were unable to recover it. 
00:38:08.189 [2024-05-15 20:30:00.622554] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.190 [2024-05-15 20:30:00.622629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.190 [2024-05-15 20:30:00.622646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.190 [2024-05-15 20:30:00.622653] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.190 [2024-05-15 20:30:00.622661] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:08.190 [2024-05-15 20:30:00.622679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:08.190 qpair failed and we were unable to recover it. 00:38:08.190 [2024-05-15 20:30:00.632462] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.190 [2024-05-15 20:30:00.632539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.190 [2024-05-15 20:30:00.632556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.190 [2024-05-15 20:30:00.632564] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.190 [2024-05-15 20:30:00.632571] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:08.190 [2024-05-15 20:30:00.632586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:08.190 qpair failed and we were unable to recover it. 00:38:08.190 [2024-05-15 20:30:00.642505] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.190 [2024-05-15 20:30:00.642586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.190 [2024-05-15 20:30:00.642603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.190 [2024-05-15 20:30:00.642611] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.190 [2024-05-15 20:30:00.642618] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:08.190 [2024-05-15 20:30:00.642634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:08.190 qpair failed and we were unable to recover it. 
00:38:08.190 [2024-05-15 20:30:00.652607] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.190 [2024-05-15 20:30:00.652678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.190 [2024-05-15 20:30:00.652695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.190 [2024-05-15 20:30:00.652702] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.190 [2024-05-15 20:30:00.652709] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:08.190 [2024-05-15 20:30:00.652724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:08.190 qpair failed and we were unable to recover it. 00:38:08.190 [2024-05-15 20:30:00.662594] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.190 [2024-05-15 20:30:00.662664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.190 [2024-05-15 20:30:00.662681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.190 [2024-05-15 20:30:00.662688] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.190 [2024-05-15 20:30:00.662694] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:08.190 [2024-05-15 20:30:00.662709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:08.190 qpair failed and we were unable to recover it. 00:38:08.190 [2024-05-15 20:30:00.672663] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.190 [2024-05-15 20:30:00.672741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.190 [2024-05-15 20:30:00.672761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.190 [2024-05-15 20:30:00.672769] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.190 [2024-05-15 20:30:00.672776] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:08.190 [2024-05-15 20:30:00.672790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:08.190 qpair failed and we were unable to recover it. 
00:38:08.190 [2024-05-15 20:30:00.682714] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.190 [2024-05-15 20:30:00.682793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.190 [2024-05-15 20:30:00.682809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.190 [2024-05-15 20:30:00.682817] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.190 [2024-05-15 20:30:00.682824] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:08.190 [2024-05-15 20:30:00.682839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:08.190 qpair failed and we were unable to recover it. 00:38:08.452 [2024-05-15 20:30:00.692749] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.452 [2024-05-15 20:30:00.692821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.452 [2024-05-15 20:30:00.692838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.452 [2024-05-15 20:30:00.692845] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.452 [2024-05-15 20:30:00.692852] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:08.452 [2024-05-15 20:30:00.692867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:08.452 qpair failed and we were unable to recover it. 00:38:08.452 [2024-05-15 20:30:00.702739] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.452 [2024-05-15 20:30:00.702819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.452 [2024-05-15 20:30:00.702836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.452 [2024-05-15 20:30:00.702844] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.452 [2024-05-15 20:30:00.702851] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:08.452 [2024-05-15 20:30:00.702866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:08.452 qpair failed and we were unable to recover it. 
00:38:08.452 [2024-05-15 20:30:00.712766] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.452 [2024-05-15 20:30:00.712841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.452 [2024-05-15 20:30:00.712857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.452 [2024-05-15 20:30:00.712865] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.452 [2024-05-15 20:30:00.712876] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:08.452 [2024-05-15 20:30:00.712891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:08.452 qpair failed and we were unable to recover it. 00:38:08.452 [2024-05-15 20:30:00.722708] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.452 [2024-05-15 20:30:00.722784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.452 [2024-05-15 20:30:00.722801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.452 [2024-05-15 20:30:00.722809] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.452 [2024-05-15 20:30:00.722815] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:08.452 [2024-05-15 20:30:00.722831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:08.452 qpair failed and we were unable to recover it. 00:38:08.452 [2024-05-15 20:30:00.732790] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.452 [2024-05-15 20:30:00.732871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.452 [2024-05-15 20:30:00.732888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.452 [2024-05-15 20:30:00.732896] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.452 [2024-05-15 20:30:00.732903] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:08.452 [2024-05-15 20:30:00.732917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:08.452 qpair failed and we were unable to recover it. 
00:38:08.452 [2024-05-15 20:30:00.742860] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.452 [2024-05-15 20:30:00.742936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.453 [2024-05-15 20:30:00.742953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.453 [2024-05-15 20:30:00.742960] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.453 [2024-05-15 20:30:00.742967] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:08.453 [2024-05-15 20:30:00.742981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:08.453 qpair failed and we were unable to recover it. 00:38:08.453 [2024-05-15 20:30:00.752916] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.453 [2024-05-15 20:30:00.752993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.453 [2024-05-15 20:30:00.753010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.453 [2024-05-15 20:30:00.753017] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.453 [2024-05-15 20:30:00.753024] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf8d520 00:38:08.453 [2024-05-15 20:30:00.753039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:08.453 qpair failed and we were unable to recover it. 
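The disconnect test above keeps logging the same failure signature: the target rejects each I/O qpair CONNECT with "Unknown controller ID 0x1", the host reports the connect poll failure (sct 1, sc 130) and gives up on the qpair, and the block repeats with a fresh timestamp. When triaging an archived copy of this console output, a short shell pass is enough to quantify the retries. The snippet below is a hypothetical helper, not part of the SPDK test suite, and the log path is an assumption.

  #!/usr/bin/env bash
  # Hypothetical triage helper for an archived copy of this console log.
  # The default LOG path is an assumption; point it at wherever the output was saved.
  LOG="${1:-nvmf-tcp-phy-autotest-console.log}"

  # Count how many Fabric CONNECT polls failed during the disconnect window.
  printf 'failed CONNECT polls: '
  grep -o 'Failed to poll NVMe-oF Fabric CONNECT command' "$LOG" | wc -l

  # Break the failures down by the TCP qpair pointer the host driver reported,
  # so repeated retries against the same tqpair stand out.
  grep -o 'Failed to connect tqpair=0x[0-9a-f]*' "$LOG" | sort | uniq -c | sort -rn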
00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Write completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Write completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Write completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Write completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Write completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Write completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 [2024-05-15 20:30:00.753432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:08.453 [2024-05-15 20:30:00.762914] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.453 [2024-05-15 20:30:00.762997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.453 [2024-05-15 20:30:00.763018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.453 [2024-05-15 20:30:00.763027] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric 
CONNECT command 00:38:08.453 [2024-05-15 20:30:00.763034] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:38:08.453 [2024-05-15 20:30:00.763052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:08.453 qpair failed and we were unable to recover it. 00:38:08.453 [2024-05-15 20:30:00.772932] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.453 [2024-05-15 20:30:00.773006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.453 [2024-05-15 20:30:00.773023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.453 [2024-05-15 20:30:00.773031] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.453 [2024-05-15 20:30:00.773038] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:38:08.453 [2024-05-15 20:30:00.773054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:08.453 qpair failed and we were unable to recover it. 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Write completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Write completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Write completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Write completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Write completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Write completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Write completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Write completed with error (sct=0, sc=8) 00:38:08.453 starting I/O 
failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Write completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Write completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Write completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Write completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.453 starting I/O failed 00:38:08.453 [2024-05-15 20:30:00.773924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:08.453 [2024-05-15 20:30:00.782972] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.453 [2024-05-15 20:30:00.783091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.453 [2024-05-15 20:30:00.783140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.453 [2024-05-15 20:30:00.783163] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.453 [2024-05-15 20:30:00.783182] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69b0000b90 00:38:08.453 [2024-05-15 20:30:00.783227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:08.453 qpair failed and we were unable to recover it. 00:38:08.453 [2024-05-15 20:30:00.793031] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.453 [2024-05-15 20:30:00.793157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.453 [2024-05-15 20:30:00.793190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.453 [2024-05-15 20:30:00.793205] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.453 [2024-05-15 20:30:00.793218] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69b0000b90 00:38:08.453 [2024-05-15 20:30:00.793249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:08.453 qpair failed and we were unable to recover it. 
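The bursts of "Read/Write completed with error (sct=0, sc=8) starting I/O failed" look like the test tool reporting its in-flight commands completing with an error status while the qpairs are torn down. The same style of one-liner separates aborted reads from aborted writes in an archived log; again a hypothetical helper, reusing the $LOG variable from the sketch above.

  # Tally reads vs. writes that completed with (sct=0, sc=8) during the teardown
  # (hypothetical helper; assumes $LOG points at an archived copy of this output).
  grep -Eo '(Read|Write) completed with error \(sct=0, sc=8\)' "$LOG" | sort | uniq -c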
00:38:08.453 [2024-05-15 20:30:00.793604] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9b0f0 is same with the state(5) to be set 00:38:08.453 Read completed with error (sct=0, sc=8) 00:38:08.454 starting I/O failed 00:38:08.454 Read completed with error (sct=0, sc=8) 00:38:08.454 starting I/O failed 00:38:08.454 Read completed with error (sct=0, sc=8) 00:38:08.454 starting I/O failed 00:38:08.454 Read completed with error (sct=0, sc=8) 00:38:08.454 starting I/O failed 00:38:08.454 Read completed with error (sct=0, sc=8) 00:38:08.454 starting I/O failed 00:38:08.454 Read completed with error (sct=0, sc=8) 00:38:08.454 starting I/O failed 00:38:08.454 Read completed with error (sct=0, sc=8) 00:38:08.454 starting I/O failed 00:38:08.454 Read completed with error (sct=0, sc=8) 00:38:08.454 starting I/O failed 00:38:08.454 Read completed with error (sct=0, sc=8) 00:38:08.454 starting I/O failed 00:38:08.454 Read completed with error (sct=0, sc=8) 00:38:08.454 starting I/O failed 00:38:08.454 Read completed with error (sct=0, sc=8) 00:38:08.454 starting I/O failed 00:38:08.454 Read completed with error (sct=0, sc=8) 00:38:08.454 starting I/O failed 00:38:08.454 Read completed with error (sct=0, sc=8) 00:38:08.454 starting I/O failed 00:38:08.454 Read completed with error (sct=0, sc=8) 00:38:08.454 starting I/O failed 00:38:08.454 Read completed with error (sct=0, sc=8) 00:38:08.454 starting I/O failed 00:38:08.454 Read completed with error (sct=0, sc=8) 00:38:08.454 starting I/O failed 00:38:08.454 Write completed with error (sct=0, sc=8) 00:38:08.454 starting I/O failed 00:38:08.454 Read completed with error (sct=0, sc=8) 00:38:08.454 starting I/O failed 00:38:08.454 Read completed with error (sct=0, sc=8) 00:38:08.454 starting I/O failed 00:38:08.454 Read completed with error (sct=0, sc=8) 00:38:08.454 starting I/O failed 00:38:08.454 Read completed with error (sct=0, sc=8) 00:38:08.454 starting I/O failed 00:38:08.454 Read completed with error (sct=0, sc=8) 00:38:08.454 starting I/O failed 00:38:08.454 Write completed with error (sct=0, sc=8) 00:38:08.454 starting I/O failed 00:38:08.454 Read completed with error (sct=0, sc=8) 00:38:08.454 starting I/O failed 00:38:08.454 Read completed with error (sct=0, sc=8) 00:38:08.454 starting I/O failed 00:38:08.454 Write completed with error (sct=0, sc=8) 00:38:08.454 starting I/O failed 00:38:08.454 Read completed with error (sct=0, sc=8) 00:38:08.454 starting I/O failed 00:38:08.454 Read completed with error (sct=0, sc=8) 00:38:08.454 starting I/O failed 00:38:08.454 Read completed with error (sct=0, sc=8) 00:38:08.454 starting I/O failed 00:38:08.454 Read completed with error (sct=0, sc=8) 00:38:08.454 starting I/O failed 00:38:08.454 Write completed with error (sct=0, sc=8) 00:38:08.454 starting I/O failed 00:38:08.454 Read completed with error (sct=0, sc=8) 00:38:08.454 starting I/O failed 00:38:08.454 [2024-05-15 20:30:00.794000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:08.454 [2024-05-15 20:30:00.803005] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.454 [2024-05-15 20:30:00.803073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.454 [2024-05-15 20:30:00.803088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command 
completed with error: sct 1, sc 130 00:38:08.454 [2024-05-15 20:30:00.803094] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.454 [2024-05-15 20:30:00.803098] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69b8000b90 00:38:08.454 [2024-05-15 20:30:00.803111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:08.454 qpair failed and we were unable to recover it. 00:38:08.454 [2024-05-15 20:30:00.813031] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:08.454 [2024-05-15 20:30:00.813091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:08.454 [2024-05-15 20:30:00.813104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:08.454 [2024-05-15 20:30:00.813109] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:08.454 [2024-05-15 20:30:00.813114] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69b8000b90 00:38:08.454 [2024-05-15 20:30:00.813125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:08.454 qpair failed and we were unable to recover it. 00:38:08.454 [2024-05-15 20:30:00.813645] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf9b0f0 (9): Bad file descriptor 00:38:08.454 Initializing NVMe Controllers 00:38:08.454 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:08.454 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:08.454 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:38:08.454 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:38:08.454 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:38:08.454 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:38:08.454 Initialization complete. Launching workers. 
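The attach summary above shows the initiator reaching the target at 10.0.0.2:4420 for nqn.2016-06.io.spdk:cnode1 and associating the TCP connection with lcores 0-3. For a manual cross-check of the same listener from outside the SPDK test binaries, the kernel initiator can be pointed at the identical address/port/subnqn triple with nvme-cli. This is a hedged sketch, not a step host/target_disconnect.sh performs; it assumes nvme-cli and the nvme-tcp kernel module are available on the host and that the target from this run is still listening.

  # Hedged manual cross-check (not part of the test): attach the kernel NVMe/TCP
  # initiator to the listener used above, list what appears, then detach.
  sudo modprobe nvme-tcp
  sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  sudo nvme list
  sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1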
00:38:08.454 Starting thread on core 1 00:38:08.454 Starting thread on core 2 00:38:08.454 Starting thread on core 3 00:38:08.454 Starting thread on core 0 00:38:08.454 20:30:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:38:08.454 00:38:08.454 real 0m11.466s 00:38:08.454 user 0m21.456s 00:38:08.454 sys 0m4.152s 00:38:08.454 20:30:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:08.454 20:30:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:08.454 ************************************ 00:38:08.454 END TEST nvmf_target_disconnect_tc2 00:38:08.454 ************************************ 00:38:08.454 20:30:00 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:38:08.454 20:30:00 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:38:08.454 20:30:00 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:38:08.454 20:30:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:08.454 20:30:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:38:08.454 20:30:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:08.454 20:30:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:38:08.454 20:30:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:08.454 20:30:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:08.454 rmmod nvme_tcp 00:38:08.454 rmmod nvme_fabrics 00:38:08.454 rmmod nvme_keyring 00:38:08.454 20:30:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:08.454 20:30:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:38:08.454 20:30:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:38:08.454 20:30:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 326200 ']' 00:38:08.454 20:30:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 326200 00:38:08.454 20:30:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 326200 ']' 00:38:08.454 20:30:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 326200 00:38:08.454 20:30:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:38:08.454 20:30:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:38:08.454 20:30:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 326200 00:38:08.715 20:30:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_4 00:38:08.715 20:30:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:38:08.715 20:30:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 326200' 00:38:08.715 killing process with pid 326200 00:38:08.715 20:30:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 326200 00:38:08.715 [2024-05-15 20:30:00.992148] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 
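At this point nvmftestfini begins tearing the rig down: nvme-tcp and nvme-fabrics are unloaded (the rmmod lines above), the target process (pid 326200, reactor_4) is killed, and just below the SPDK network namespace is removed and the cvl_0_1 address flushed. If a run dies before this cleanup fires, roughly the same steps can be replayed by hand. The sketch below mirrors what this log shows rather than quoting nvmf/common.sh; the workspace path, interface name, and namespace name are taken from this specific run and are assumptions anywhere else.

  # Hedged manual cleanup mirroring the nvmftestfini sequence visible in this log.
  sudo modprobe -v -r nvme-tcp
  sudo modprobe -v -r nvme-fabrics
  sudo pkill -9 -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk || true
  sudo ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # roughly what _remove_spdk_ns does here
  sudo ip -4 addr flush cvl_0_1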
00:38:08.715 20:30:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 326200 00:38:08.715 20:30:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:08.715 20:30:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:08.715 20:30:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:08.715 20:30:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:08.715 20:30:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:08.715 20:30:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:08.715 20:30:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:08.715 20:30:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:11.262 20:30:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:11.262 00:38:11.262 real 0m22.534s 00:38:11.262 user 0m49.240s 00:38:11.262 sys 0m10.790s 00:38:11.262 20:30:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:11.262 20:30:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:11.262 ************************************ 00:38:11.262 END TEST nvmf_target_disconnect 00:38:11.262 ************************************ 00:38:11.262 20:30:03 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:38:11.262 20:30:03 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:11.262 20:30:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:11.262 20:30:03 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:38:11.262 00:38:11.263 real 30m33.582s 00:38:11.263 user 75m11.964s 00:38:11.263 sys 8m34.786s 00:38:11.263 20:30:03 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:11.263 20:30:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:11.263 ************************************ 00:38:11.263 END TEST nvmf_tcp 00:38:11.263 ************************************ 00:38:11.263 20:30:03 -- spdk/autotest.sh@284 -- # [[ 0 -eq 0 ]] 00:38:11.263 20:30:03 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:38:11.263 20:30:03 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:38:11.263 20:30:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:11.263 20:30:03 -- common/autotest_common.sh@10 -- # set +x 00:38:11.263 ************************************ 00:38:11.263 START TEST spdkcli_nvmf_tcp 00:38:11.263 ************************************ 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:38:11.263 * Looking for test storage... 
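The spdkcli test starting here is driven by a single script, so it can be rerun on its own outside the run_test wrapper. A sketch, assuming the same workspace layout as this job (substitute your own SPDK checkout) and root privileges for hugepage and NIC access:

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # or the path to your own SPDK tree
sudo ./test/spdkcli/nvmf.sh --transport=tcp             # the same invocation run_test performs here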
00:38:11.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=328126 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 328126 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 328126 ']' 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:11.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:38:11.263 20:30:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:11.263 [2024-05-15 20:30:03.594963] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:38:11.263 [2024-05-15 20:30:03.595034] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid328126 ] 00:38:11.263 EAL: No free 2048 kB hugepages reported on node 1 00:38:11.263 [2024-05-15 20:30:03.680963] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:11.263 [2024-05-15 20:30:03.759245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:38:11.263 [2024-05-15 20:30:03.759252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:12.206 20:30:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:38:12.206 20:30:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:38:12.206 20:30:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:38:12.206 20:30:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:12.206 20:30:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:12.206 20:30:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:38:12.206 20:30:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:38:12.206 20:30:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:38:12.206 20:30:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:38:12.206 20:30:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:12.206 20:30:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:38:12.206 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:38:12.206 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:38:12.206 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:38:12.206 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:38:12.206 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:38:12.206 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:38:12.206 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:38:12.206 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:38:12.206 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:38:12.206 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:38:12.206 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:12.206 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:38:12.206 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:38:12.206 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:12.206 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:38:12.206 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:38:12.206 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:38:12.206 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:38:12.206 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:12.206 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:38:12.206 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:38:12.206 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:38:12.206 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:38:12.206 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:12.206 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:38:12.206 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:38:12.206 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:38:12.206 ' 00:38:14.751 [2024-05-15 20:30:06.873120] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:15.695 [2024-05-15 20:30:08.040520] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:38:15.695 [2024-05-15 20:30:08.041023] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:38:18.239 [2024-05-15 20:30:10.179463] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:38:19.622 [2024-05-15 20:30:12.012747] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:38:21.007 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:38:21.007 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:38:21.007 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:38:21.007 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:38:21.007 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:38:21.007 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:38:21.007 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:38:21.007 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:38:21.007 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:38:21.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:38:21.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:38:21.007 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:21.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:38:21.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:38:21.007 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:21.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:38:21.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:38:21.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:38:21.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:38:21.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:21.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:38:21.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:38:21.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:38:21.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:38:21.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:21.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:38:21.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:38:21.007 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:38:21.267 20:30:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:38:21.267 20:30:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:21.267 20:30:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:21.267 20:30:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:38:21.267 20:30:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:38:21.267 20:30:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:21.267 20:30:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:38:21.267 20:30:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll 
/nvmf 00:38:21.528 20:30:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:38:21.528 20:30:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:38:21.528 20:30:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:38:21.528 20:30:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:21.528 20:30:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:21.788 20:30:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:38:21.788 20:30:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:38:21.788 20:30:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:21.788 20:30:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:38:21.788 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:38:21.788 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:38:21.788 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:38:21.788 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:38:21.788 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:38:21.788 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:38:21.788 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:38:21.788 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:38:21.788 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:38:21.788 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:38:21.788 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:38:21.788 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:38:21.788 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:38:21.788 ' 00:38:27.074 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:38:27.074 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:38:27.074 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:38:27.074 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:38:27.074 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:38:27.074 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:38:27.074 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:38:27.074 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:38:27.074 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:38:27.074 
Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:38:27.074 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:38:27.074 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:38:27.074 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:38:27.074 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:38:27.074 20:30:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:38:27.074 20:30:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:27.074 20:30:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:27.074 20:30:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 328126 00:38:27.074 20:30:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 328126 ']' 00:38:27.074 20:30:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 328126 00:38:27.074 20:30:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:38:27.074 20:30:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:38:27.074 20:30:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 328126 00:38:27.335 20:30:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:38:27.335 20:30:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:38:27.335 20:30:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 328126' 00:38:27.335 killing process with pid 328126 00:38:27.335 20:30:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 328126 00:38:27.335 [2024-05-15 20:30:19.587944] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:38:27.335 20:30:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 328126 00:38:27.335 20:30:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:38:27.335 20:30:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:38:27.335 20:30:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 328126 ']' 00:38:27.335 20:30:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 328126 00:38:27.335 20:30:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 328126 ']' 00:38:27.335 20:30:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 328126 00:38:27.335 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (328126) - No such process 00:38:27.335 20:30:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 328126 is not found' 00:38:27.335 Process with pid 328126 is not found 00:38:27.335 20:30:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:38:27.335 20:30:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:38:27.335 20:30:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:38:27.335 00:38:27.335 real 0m16.324s 00:38:27.335 user 0m34.427s 00:38:27.335 sys 0m0.827s 00:38:27.335 20:30:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:27.335 20:30:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
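Each spdkcli branch operation executed above is ultimately a JSON-RPC call against the target's /var/tmp/spdk.sock socket, so the same configuration can be built with scripts/rpc.py directly. A rough sketch of one slice of the create sequence, with parameters copied from the spdkcli_job.py commands above (rpc.py talks to the default socket; the qpair limit set via spdkcli is omitted here for brevity):

./scripts/rpc.py bdev_malloc_create -b Malloc3 32 512                    # 32 MiB bdev with 512-byte blocks
./scripts/rpc.py nvmf_create_transport -t tcp -u 8192                    # io_unit_size=8192 as above
./scripts/rpc.py nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -a -s N37SXV509SRW -m 4
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4260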
00:38:27.335 ************************************ 00:38:27.335 END TEST spdkcli_nvmf_tcp 00:38:27.335 ************************************ 00:38:27.335 20:30:19 -- spdk/autotest.sh@286 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:38:27.335 20:30:19 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:38:27.335 20:30:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:27.335 20:30:19 -- common/autotest_common.sh@10 -- # set +x 00:38:27.335 ************************************ 00:38:27.335 START TEST nvmf_identify_passthru 00:38:27.336 ************************************ 00:38:27.336 20:30:19 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:38:27.597 * Looking for test storage... 00:38:27.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:27.598 20:30:19 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:27.598 20:30:19 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:38:27.598 20:30:19 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:27.598 20:30:19 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:27.598 20:30:19 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:27.598 20:30:19 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:27.598 20:30:19 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:27.598 20:30:19 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:27.598 20:30:19 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:27.598 20:30:19 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:27.598 20:30:19 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:27.598 20:30:19 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:27.598 20:30:19 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:27.598 20:30:19 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:27.598 20:30:19 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:27.598 20:30:19 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:27.598 20:30:19 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:27.598 20:30:19 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:27.598 20:30:19 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:27.598 20:30:19 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:27.598 20:30:19 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:27.598 20:30:19 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:27.598 20:30:19 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.598 20:30:19 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.598 20:30:19 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.598 20:30:19 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:38:27.598 20:30:19 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.598 20:30:19 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:38:27.598 20:30:19 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:27.598 20:30:19 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:27.598 20:30:19 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:27.598 20:30:19 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:27.598 20:30:19 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:27.598 20:30:19 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:27.598 20:30:19 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:27.598 20:30:19 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:27.598 20:30:19 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:27.598 20:30:19 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:27.598 20:30:19 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:27.598 20:30:19 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:27.598 20:30:19 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.598 20:30:19 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.598 20:30:19 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.598 20:30:19 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:38:27.598 20:30:19 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.598 20:30:19 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:38:27.598 20:30:19 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:27.598 20:30:19 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:27.598 20:30:19 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:27.598 20:30:19 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:27.598 20:30:19 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:27.598 20:30:19 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:27.598 20:30:19 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:27.598 20:30:19 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:27.598 20:30:19 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:27.598 20:30:19 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:27.598 20:30:19 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:38:27.598 20:30:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:35.762 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:35.762 20:30:27 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:38:35.762 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:35.762 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:35.762 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:35.763 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:35.763 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:35.763 Found net devices under 0000:31:00.0: cvl_0_0 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:35.763 Found net devices under 0000:31:00.1: cvl_0_1 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
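The NET_TYPE=phy discovery above maps each supported PCI function to its kernel netdev by globbing /sys/bus/pci/devices/<bdf>/net/, which is why the two E810 ports resolve to cvl_0_0 and cvl_0_1. The lookup can be reproduced by hand (a sketch, using the bdfs found in this run):

for pci in 0000:31:00.0 0000:31:00.1; do
    ls "/sys/bus/pci/devices/$pci/net/"    # prints the interface bound to that port, e.g. cvl_0_0
done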
00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:35.763 20:30:27 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:35.763 20:30:28 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:35.763 20:30:28 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:35.763 20:30:28 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:35.763 20:30:28 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:35.763 20:30:28 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:35.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:35.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.514 ms 00:38:35.763 00:38:35.763 --- 10.0.0.2 ping statistics --- 00:38:35.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:35.763 rtt min/avg/max/mdev = 0.514/0.514/0.514/0.000 ms 00:38:35.763 20:30:28 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:35.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:35.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.369 ms 00:38:35.763 00:38:35.763 --- 10.0.0.1 ping statistics --- 00:38:35.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:35.763 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:38:35.763 20:30:28 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:35.763 20:30:28 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:38:35.763 20:30:28 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:35.763 20:30:28 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:35.763 20:30:28 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:35.763 20:30:28 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:35.763 20:30:28 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:35.763 20:30:28 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:35.763 20:30:28 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:35.763 20:30:28 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:38:35.763 20:30:28 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:38:35.763 20:30:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:35.763 20:30:28 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:38:35.763 20:30:28 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:38:35.763 20:30:28 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:38:35.763 20:30:28 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:38:35.763 20:30:28 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:38:35.763 20:30:28 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:38:35.763 20:30:28 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:38:35.763 20:30:28 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:38:35.763 20:30:28 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:38:35.764 20:30:28 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:38:36.025 20:30:28 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:38:36.025 20:30:28 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:65:00.0 00:38:36.025 20:30:28 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:65:00.0 00:38:36.025 20:30:28 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:38:36.025 20:30:28 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:38:36.025 20:30:28 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:38:36.025 20:30:28 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:38:36.025 20:30:28 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:38:36.025 EAL: No free 2048 kB hugepages reported on node 1 00:38:36.285 
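The passthru test needs reference values to compare against later, so the serial and model number are read straight from the PCIe controller before any NVMe-oF plumbing is configured. Roughly (a sketch; 0000:65:00.0 is the bdf gen_nvme.sh reported in this run, and awk keeps only the third field, which is why the model collapses to SAMSUNG):

bdf=0000:65:00.0
./build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}'
./build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:'  | awk '{print $3}'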
20:30:28 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605494 00:38:36.285 20:30:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:38:36.285 20:30:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:38:36.285 20:30:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:38:36.546 EAL: No free 2048 kB hugepages reported on node 1 00:38:36.807 20:30:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:38:36.807 20:30:29 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:38:36.807 20:30:29 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:36.807 20:30:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:36.807 20:30:29 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:38:36.807 20:30:29 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:38:36.807 20:30:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:36.807 20:30:29 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=336100 00:38:36.807 20:30:29 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:36.807 20:30:29 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:38:36.807 20:30:29 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 336100 00:38:36.807 20:30:29 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 336100 ']' 00:38:36.807 20:30:29 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:36.807 20:30:29 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:38:36.807 20:30:29 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:36.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:36.807 20:30:29 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:38:36.807 20:30:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:37.067 [2024-05-15 20:30:29.341103] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:38:37.068 [2024-05-15 20:30:29.341153] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:37.068 EAL: No free 2048 kB hugepages reported on node 1 00:38:37.068 [2024-05-15 20:30:29.430590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:37.068 [2024-05-15 20:30:29.497852] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:37.068 [2024-05-15 20:30:29.497889] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
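Because nvmf_tgt is launched inside the target namespace with --wait-for-rpc, nothing is configured until the JSON-RPC calls that follow arrive over /var/tmp/spdk.sock (a filesystem socket, so it is reachable from the default namespace). Outside the harness the same bring-up looks roughly like this (a sketch reusing the namespace, shm id and core mask from the command above; the transport options mirror the rpc_cmd call the test makes below):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
sleep 2                                                       # crude stand-in for the waitforlisten step logged here
./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr    # enable the custom identify handler before init
./scripts/rpc.py framework_start_init                         # now let the framework finish starting
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192      # same transport options the test applies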
00:38:37.068 [2024-05-15 20:30:29.497897] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:37.068 [2024-05-15 20:30:29.497904] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:37.068 [2024-05-15 20:30:29.497910] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:37.068 [2024-05-15 20:30:29.498035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:38:37.068 [2024-05-15 20:30:29.498153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:38:37.068 [2024-05-15 20:30:29.498299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:37.068 [2024-05-15 20:30:29.498300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:38:38.009 20:30:30 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:38:38.009 20:30:30 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:38:38.009 20:30:30 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:38:38.009 20:30:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:38.009 20:30:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:38.009 INFO: Log level set to 20 00:38:38.009 INFO: Requests: 00:38:38.009 { 00:38:38.009 "jsonrpc": "2.0", 00:38:38.009 "method": "nvmf_set_config", 00:38:38.009 "id": 1, 00:38:38.009 "params": { 00:38:38.009 "admin_cmd_passthru": { 00:38:38.009 "identify_ctrlr": true 00:38:38.009 } 00:38:38.009 } 00:38:38.009 } 00:38:38.009 00:38:38.009 INFO: response: 00:38:38.009 { 00:38:38.009 "jsonrpc": "2.0", 00:38:38.009 "id": 1, 00:38:38.009 "result": true 00:38:38.009 } 00:38:38.009 00:38:38.009 20:30:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:38.009 20:30:30 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:38:38.009 20:30:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:38.009 20:30:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:38.009 INFO: Setting log level to 20 00:38:38.009 INFO: Setting log level to 20 00:38:38.009 INFO: Log level set to 20 00:38:38.009 INFO: Log level set to 20 00:38:38.009 INFO: Requests: 00:38:38.009 { 00:38:38.009 "jsonrpc": "2.0", 00:38:38.009 "method": "framework_start_init", 00:38:38.009 "id": 1 00:38:38.009 } 00:38:38.009 00:38:38.009 INFO: Requests: 00:38:38.009 { 00:38:38.009 "jsonrpc": "2.0", 00:38:38.009 "method": "framework_start_init", 00:38:38.009 "id": 1 00:38:38.009 } 00:38:38.009 00:38:38.009 [2024-05-15 20:30:30.291055] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:38:38.009 INFO: response: 00:38:38.009 { 00:38:38.009 "jsonrpc": "2.0", 00:38:38.009 "id": 1, 00:38:38.009 "result": true 00:38:38.009 } 00:38:38.009 00:38:38.009 INFO: response: 00:38:38.009 { 00:38:38.009 "jsonrpc": "2.0", 00:38:38.009 "id": 1, 00:38:38.009 "result": true 00:38:38.009 } 00:38:38.009 00:38:38.009 20:30:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:38.009 20:30:30 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:38.009 20:30:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:38.009 20:30:30 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:38:38.009 INFO: Setting log level to 40 00:38:38.009 INFO: Setting log level to 40 00:38:38.009 INFO: Setting log level to 40 00:38:38.009 [2024-05-15 20:30:30.304299] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:38.009 20:30:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:38.009 20:30:30 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:38:38.009 20:30:30 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:38.009 20:30:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:38.009 20:30:30 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:38:38.009 20:30:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:38.009 20:30:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:38.270 Nvme0n1 00:38:38.270 20:30:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:38.270 20:30:30 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:38:38.270 20:30:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:38.270 20:30:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:38.270 20:30:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:38.270 20:30:30 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:38:38.270 20:30:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:38.270 20:30:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:38.270 20:30:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:38.270 20:30:30 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:38.270 20:30:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:38.270 20:30:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:38.270 [2024-05-15 20:30:30.694120] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:38:38.270 [2024-05-15 20:30:30.694369] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:38.270 20:30:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:38.270 20:30:30 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:38:38.270 20:30:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:38.270 20:30:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:38.270 [ 00:38:38.270 { 00:38:38.270 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:38:38.270 "subtype": "Discovery", 00:38:38.270 "listen_addresses": [], 00:38:38.270 "allow_any_host": true, 00:38:38.270 "hosts": [] 00:38:38.270 }, 00:38:38.270 { 00:38:38.270 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:38.270 "subtype": "NVMe", 00:38:38.270 "listen_addresses": [ 00:38:38.270 { 00:38:38.270 "trtype": "TCP", 
00:38:38.270 "adrfam": "IPv4", 00:38:38.270 "traddr": "10.0.0.2", 00:38:38.270 "trsvcid": "4420" 00:38:38.270 } 00:38:38.270 ], 00:38:38.270 "allow_any_host": true, 00:38:38.270 "hosts": [], 00:38:38.270 "serial_number": "SPDK00000000000001", 00:38:38.270 "model_number": "SPDK bdev Controller", 00:38:38.270 "max_namespaces": 1, 00:38:38.270 "min_cntlid": 1, 00:38:38.270 "max_cntlid": 65519, 00:38:38.270 "namespaces": [ 00:38:38.270 { 00:38:38.270 "nsid": 1, 00:38:38.270 "bdev_name": "Nvme0n1", 00:38:38.270 "name": "Nvme0n1", 00:38:38.270 "nguid": "36344730526054940025384500000023", 00:38:38.270 "uuid": "36344730-5260-5494-0025-384500000023" 00:38:38.270 } 00:38:38.270 ] 00:38:38.270 } 00:38:38.270 ] 00:38:38.270 20:30:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:38.270 20:30:30 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:38:38.270 20:30:30 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:38:38.270 20:30:30 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:38:38.270 EAL: No free 2048 kB hugepages reported on node 1 00:38:38.530 20:30:30 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:38:38.530 20:30:30 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:38:38.530 20:30:30 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:38:38.530 20:30:30 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:38:38.530 EAL: No free 2048 kB hugepages reported on node 1 00:38:38.791 20:30:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:38:38.791 20:30:31 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:38:38.791 20:30:31 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:38:38.791 20:30:31 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:38.791 20:30:31 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:38.791 20:30:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:38.791 20:30:31 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:38.791 20:30:31 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:38:38.791 20:30:31 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:38:38.791 20:30:31 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:38.791 20:30:31 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:38:38.791 20:30:31 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:38.791 20:30:31 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:38:38.791 20:30:31 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:38.791 20:30:31 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:38.791 rmmod nvme_tcp 00:38:38.791 rmmod nvme_fabrics 00:38:38.791 rmmod 
nvme_keyring 00:38:38.791 20:30:31 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:38.791 20:30:31 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:38:38.791 20:30:31 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:38:38.791 20:30:31 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 336100 ']' 00:38:38.791 20:30:31 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 336100 00:38:38.791 20:30:31 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 336100 ']' 00:38:38.791 20:30:31 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 336100 00:38:38.791 20:30:31 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:38:38.791 20:30:31 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:38:38.791 20:30:31 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 336100 00:38:38.791 20:30:31 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:38:38.791 20:30:31 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:38:38.791 20:30:31 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 336100' 00:38:38.791 killing process with pid 336100 00:38:38.791 20:30:31 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 336100 00:38:38.791 [2024-05-15 20:30:31.262769] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:38:38.791 20:30:31 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 336100 00:38:39.052 20:30:31 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:39.052 20:30:31 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:39.052 20:30:31 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:39.052 20:30:31 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:39.052 20:30:31 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:39.052 20:30:31 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:39.052 20:30:31 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:39.052 20:30:31 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:41.666 20:30:33 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:41.666 00:38:41.666 real 0m13.799s 00:38:41.666 user 0m10.821s 00:38:41.666 sys 0m6.842s 00:38:41.666 20:30:33 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:41.666 20:30:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:41.666 ************************************ 00:38:41.666 END TEST nvmf_identify_passthru 00:38:41.666 ************************************ 00:38:41.666 20:30:33 -- spdk/autotest.sh@288 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:38:41.666 20:30:33 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:41.666 20:30:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:41.666 20:30:33 -- common/autotest_common.sh@10 -- # set +x 00:38:41.666 ************************************ 00:38:41.666 START TEST nvmf_dif 00:38:41.666 
************************************ 00:38:41.666 20:30:33 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:38:41.666 * Looking for test storage... 00:38:41.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:41.666 20:30:33 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:41.667 20:30:33 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:38:41.667 20:30:33 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:41.667 20:30:33 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:41.667 20:30:33 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:41.667 20:30:33 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:41.667 20:30:33 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:41.667 20:30:33 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:41.667 20:30:33 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:41.667 20:30:33 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:41.667 20:30:33 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:41.667 20:30:33 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:41.667 20:30:33 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:41.667 20:30:33 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:41.667 20:30:33 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:41.667 20:30:33 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:41.667 20:30:33 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:41.667 20:30:33 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:41.667 20:30:33 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:41.667 20:30:33 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:41.667 20:30:33 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:41.667 20:30:33 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:41.667 20:30:33 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.667 20:30:33 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.667 20:30:33 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.667 20:30:33 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:38:41.667 20:30:33 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.667 20:30:33 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:38:41.667 20:30:33 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:41.667 20:30:33 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:41.667 20:30:33 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:41.667 20:30:33 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:41.667 20:30:33 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:41.667 20:30:33 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:41.667 20:30:33 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:41.667 20:30:33 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:41.667 20:30:33 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:38:41.667 20:30:33 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:38:41.667 20:30:33 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:38:41.667 20:30:33 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:38:41.667 20:30:33 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:38:41.667 20:30:33 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:41.667 20:30:33 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:41.667 20:30:33 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:41.667 20:30:33 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:41.667 20:30:33 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:41.667 20:30:33 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:41.667 20:30:33 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:41.667 20:30:33 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:41.667 20:30:33 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:41.667 20:30:33 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:41.667 20:30:33 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:38:41.667 20:30:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:49.808 20:30:40 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:49.808 20:30:40 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:38:49.808 20:30:40 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:49.808 20:30:40 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:49.808 20:30:40 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:49.808 20:30:40 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:49.808 20:30:40 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 
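The gather_supported_nvmf_pci_devs trace that continues below walks /sys/bus/pci to map each supported NIC PCI function to its kernel net device name. Condensed into plain shell (the two PCI addresses are the E810 ports found in this run; the array expressions are the ones visible in the trace), the same idea is:

  # Condensed sketch of the sysfs walk traced below (illustrative PCI addresses).
  for pci in 0000:31:00.0 0000:31:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
      pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep only interface names
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done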
00:38:49.808 20:30:40 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:38:49.808 20:30:40 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:49.808 20:30:40 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:49.809 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:49.809 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:49.809 20:30:40 nvmf_dif -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:49.809 Found net devices under 0000:31:00.0: cvl_0_0 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:49.809 20:30:40 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:49.809 20:30:41 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:49.809 20:30:41 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:49.809 Found net devices under 0000:31:00.1: cvl_0_1 00:38:49.809 20:30:41 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:49.809 20:30:41 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:49.809 20:30:41 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:38:49.809 20:30:41 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:49.809 20:30:41 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:49.809 20:30:41 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:49.809 20:30:41 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:49.809 20:30:41 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:49.809 20:30:41 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:49.809 20:30:41 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:49.809 20:30:41 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:49.809 20:30:41 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:49.809 20:30:41 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:49.809 20:30:41 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:49.809 20:30:41 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:49.809 20:30:41 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:49.809 20:30:41 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:49.809 20:30:41 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:49.809 20:30:41 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:49.809 20:30:41 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:49.809 20:30:41 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:49.809 20:30:41 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:49.809 20:30:41 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:49.809 20:30:41 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:49.809 20:30:41 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:49.809 20:30:41 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:49.809 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:49.809 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.736 ms 00:38:49.809 00:38:49.809 --- 10.0.0.2 ping statistics --- 00:38:49.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:49.809 rtt min/avg/max/mdev = 0.736/0.736/0.736/0.000 ms 00:38:49.809 20:30:41 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:49.809 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:49.809 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.369 ms 00:38:49.809 00:38:49.809 --- 10.0.0.1 ping statistics --- 00:38:49.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:49.809 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:38:49.809 20:30:41 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:49.809 20:30:41 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:38:49.809 20:30:41 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:38:49.809 20:30:41 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:53.110 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:38:53.110 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:38:53.110 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:38:53.111 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:38:53.111 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:38:53.111 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:38:53.111 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:38:53.111 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:38:53.111 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:38:53.111 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:38:53.111 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:38:53.111 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:38:53.111 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:38:53.111 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:38:53.111 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:38:53.111 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:38:53.111 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:38:53.111 20:30:45 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:53.111 20:30:45 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:53.111 20:30:45 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:53.111 20:30:45 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:53.111 20:30:45 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:53.111 20:30:45 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:53.111 20:30:45 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:38:53.111 20:30:45 nvmf_dif -- 
target/dif.sh@137 -- # nvmfappstart 00:38:53.111 20:30:45 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:53.111 20:30:45 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:38:53.111 20:30:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:53.111 20:30:45 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=342547 00:38:53.111 20:30:45 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 342547 00:38:53.111 20:30:45 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 342547 ']' 00:38:53.111 20:30:45 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:53.111 20:30:45 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:38:53.111 20:30:45 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:53.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:53.111 20:30:45 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:38:53.111 20:30:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:53.111 20:30:45 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:38:53.111 [2024-05-15 20:30:45.519617] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:38:53.111 [2024-05-15 20:30:45.519681] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:53.111 EAL: No free 2048 kB hugepages reported on node 1 00:38:53.371 [2024-05-15 20:30:45.613440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:53.371 [2024-05-15 20:30:45.709399] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:53.371 [2024-05-15 20:30:45.709457] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:53.371 [2024-05-15 20:30:45.709465] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:53.371 [2024-05-15 20:30:45.709472] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:53.371 [2024-05-15 20:30:45.709478] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
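Stripped of the xtrace noise, the DIF target bring-up that follows is a plain JSON-RPC sequence; rpc_cmd in these tests is a thin wrapper over scripts/rpc.py talking to /var/tmp/spdk.sock, so the equivalent manual invocation is roughly the sketch below (all arguments are copied from the rpc_cmd trace that follows):

  # Equivalent rpc.py sequence for the fio_dif_1_default setup traced below.
  scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1   # null bdev: 512B blocks, 16B metadata, DIF type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420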
00:38:53.371 [2024-05-15 20:30:45.709512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:53.942 20:30:46 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:38:53.942 20:30:46 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:38:53.942 20:30:46 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:53.942 20:30:46 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:53.942 20:30:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:53.942 20:30:46 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:53.942 20:30:46 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:38:53.942 20:30:46 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:38:53.942 20:30:46 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:53.942 20:30:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:53.942 [2024-05-15 20:30:46.426615] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:53.942 20:30:46 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:53.942 20:30:46 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:38:53.942 20:30:46 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:53.942 20:30:46 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:53.942 20:30:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:54.202 ************************************ 00:38:54.202 START TEST fio_dif_1_default 00:38:54.202 ************************************ 00:38:54.202 20:30:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:38:54.202 20:30:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:38:54.202 20:30:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:38:54.202 20:30:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:38:54.202 20:30:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:38:54.202 20:30:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:38:54.202 20:30:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:54.202 20:30:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:54.202 20:30:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:54.202 bdev_null0 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:54.203 [2024-05-15 20:30:46.502759] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:38:54.203 [2024-05-15 20:30:46.502955] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:54.203 { 00:38:54.203 "params": { 00:38:54.203 "name": "Nvme$subsystem", 00:38:54.203 "trtype": "$TEST_TRANSPORT", 00:38:54.203 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:54.203 "adrfam": "ipv4", 00:38:54.203 "trsvcid": "$NVMF_PORT", 00:38:54.203 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:54.203 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:54.203 "hdgst": ${hdgst:-false}, 00:38:54.203 "ddgst": ${ddgst:-false} 00:38:54.203 }, 00:38:54.203 "method": "bdev_nvme_attach_controller" 00:38:54.203 } 00:38:54.203 EOF 00:38:54.203 )") 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- 
# cat 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:54.203 "params": { 00:38:54.203 "name": "Nvme0", 00:38:54.203 "trtype": "tcp", 00:38:54.203 "traddr": "10.0.0.2", 00:38:54.203 "adrfam": "ipv4", 00:38:54.203 "trsvcid": "4420", 00:38:54.203 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:54.203 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:54.203 "hdgst": false, 00:38:54.203 "ddgst": false 00:38:54.203 }, 00:38:54.203 "method": "bdev_nvme_attach_controller" 00:38:54.203 }' 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:54.203 20:30:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:54.464 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:54.464 fio-3.35 00:38:54.464 Starting 1 thread 00:38:54.464 EAL: No free 2048 kB hugepages reported on node 1 00:39:06.699 00:39:06.699 filename0: (groupid=0, jobs=1): err= 0: pid=343071: Wed May 15 20:30:57 2024 00:39:06.699 read: IOPS=185, BW=743KiB/s (760kB/s)(7440KiB/10019msec) 00:39:06.699 slat (nsec): min=8186, max=35606, avg=8354.42, stdev=919.55 00:39:06.699 clat (usec): min=899, max=42469, avg=21522.51, stdev=20284.69 00:39:06.699 lat (usec): min=908, max=42505, avg=21530.86, stdev=20284.67 00:39:06.699 clat percentiles (usec): 00:39:06.699 | 1.00th=[ 996], 5.00th=[ 1106], 10.00th=[ 1172], 20.00th=[ 1205], 00:39:06.699 | 30.00th=[ 1221], 40.00th=[ 1254], 50.00th=[41681], 60.00th=[41681], 00:39:06.699 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:39:06.699 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 
99.95th=[42730], 00:39:06.699 | 99.99th=[42730] 00:39:06.699 bw ( KiB/s): min= 704, max= 768, per=99.92%, avg=742.40, stdev=32.17, samples=20 00:39:06.699 iops : min= 176, max= 192, avg=185.60, stdev= 8.04, samples=20 00:39:06.699 lat (usec) : 1000=1.45% 00:39:06.699 lat (msec) : 2=48.44%, 50=50.11% 00:39:06.699 cpu : usr=95.49%, sys=4.25%, ctx=17, majf=0, minf=215 00:39:06.699 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:06.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:06.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:06.699 issued rwts: total=1860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:06.699 latency : target=0, window=0, percentile=100.00%, depth=4 00:39:06.699 00:39:06.699 Run status group 0 (all jobs): 00:39:06.699 READ: bw=743KiB/s (760kB/s), 743KiB/s-743KiB/s (760kB/s-760kB/s), io=7440KiB (7619kB), run=10019-10019msec 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:06.699 00:39:06.699 real 0m11.052s 00:39:06.699 user 0m20.934s 00:39:06.699 sys 0m0.731s 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:06.699 ************************************ 00:39:06.699 END TEST fio_dif_1_default 00:39:06.699 ************************************ 00:39:06.699 20:30:57 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:39:06.699 20:30:57 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:06.699 20:30:57 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:06.699 20:30:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:06.699 ************************************ 00:39:06.699 START TEST fio_dif_1_multi_subsystems 00:39:06.699 ************************************ 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:39:06.699 20:30:57 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:06.699 bdev_null0 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:06.699 [2024-05-15 20:30:57.641017] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:06.699 bdev_null1 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:06.699 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:06.700 { 00:39:06.700 "params": { 00:39:06.700 "name": "Nvme$subsystem", 00:39:06.700 "trtype": "$TEST_TRANSPORT", 00:39:06.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:06.700 "adrfam": "ipv4", 00:39:06.700 "trsvcid": "$NVMF_PORT", 00:39:06.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:06.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:06.700 "hdgst": ${hdgst:-false}, 00:39:06.700 "ddgst": ${ddgst:-false} 00:39:06.700 }, 00:39:06.700 "method": "bdev_nvme_attach_controller" 00:39:06.700 } 00:39:06.700 EOF 00:39:06.700 )") 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:39:06.700 
20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:06.700 { 00:39:06.700 "params": { 00:39:06.700 "name": "Nvme$subsystem", 00:39:06.700 "trtype": "$TEST_TRANSPORT", 00:39:06.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:06.700 "adrfam": "ipv4", 00:39:06.700 "trsvcid": "$NVMF_PORT", 00:39:06.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:06.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:06.700 "hdgst": ${hdgst:-false}, 00:39:06.700 "ddgst": ${ddgst:-false} 00:39:06.700 }, 00:39:06.700 "method": "bdev_nvme_attach_controller" 00:39:06.700 } 00:39:06.700 EOF 00:39:06.700 )") 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
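The JSON printed just below is what gen_nvmf_target_json hands to fio through --spdk_json_conf: one bdev_nvme_attach_controller entry per subsystem, so the external fio process attaches to cnode0 and cnode1 over TCP and drives them as local bdevs. A sketch of the launch pattern follows; the job file itself is generated on the fly and only partially visible in the trace, so the file names here (bdev.json, dif_job.fio), the filename=Nvme0n1/Nvme1n1 bdev names and the thread=1 setting are assumptions rather than values shown verbatim:

  # Sketch of the fio_bdev launch used by these dif tests (plugin path from this run).
  PLUGIN=./build/fio/spdk_bdev                  # SPDK fio bdev plugin, loaded via LD_PRELOAD
  cat > dif_job.fio <<'EOF'
  [global]
  thread=1                 # assumed; the SPDK bdev plugin is normally run with threads
  rw=randread
  bs=4096
  iodepth=4
  [filename0]
  filename=Nvme0n1         # assumed bdev name from "bdev_nvme_attach_controller ... Nvme0"
  [filename1]
  filename=Nvme1n1         # assumed bdev name for the second subsystem
  EOF
  # bdev.json stands in for the generated config printed below.
  LD_PRELOAD="$PLUGIN" /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif_job.fio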
00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:06.700 "params": { 00:39:06.700 "name": "Nvme0", 00:39:06.700 "trtype": "tcp", 00:39:06.700 "traddr": "10.0.0.2", 00:39:06.700 "adrfam": "ipv4", 00:39:06.700 "trsvcid": "4420", 00:39:06.700 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:06.700 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:06.700 "hdgst": false, 00:39:06.700 "ddgst": false 00:39:06.700 }, 00:39:06.700 "method": "bdev_nvme_attach_controller" 00:39:06.700 },{ 00:39:06.700 "params": { 00:39:06.700 "name": "Nvme1", 00:39:06.700 "trtype": "tcp", 00:39:06.700 "traddr": "10.0.0.2", 00:39:06.700 "adrfam": "ipv4", 00:39:06.700 "trsvcid": "4420", 00:39:06.700 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:06.700 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:06.700 "hdgst": false, 00:39:06.700 "ddgst": false 00:39:06.700 }, 00:39:06.700 "method": "bdev_nvme_attach_controller" 00:39:06.700 }' 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:06.700 20:30:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:06.700 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:39:06.700 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:39:06.700 fio-3.35 00:39:06.700 Starting 2 threads 00:39:06.700 EAL: No free 2048 kB hugepages reported on node 1 00:39:16.696 00:39:16.696 filename0: (groupid=0, jobs=1): err= 0: pid=345473: Wed May 15 20:31:08 2024 00:39:16.696 read: IOPS=185, BW=743KiB/s (760kB/s)(7440KiB/10020msec) 00:39:16.696 slat (nsec): min=8208, max=32081, avg=8528.85, stdev=1279.57 00:39:16.696 clat (usec): min=1092, max=42273, avg=21524.01, stdev=20202.64 00:39:16.696 lat (usec): min=1101, max=42297, avg=21532.54, stdev=20202.55 00:39:16.696 clat percentiles (usec): 00:39:16.696 | 1.00th=[ 1156], 5.00th=[ 1205], 10.00th=[ 1237], 20.00th=[ 1270], 00:39:16.696 | 30.00th=[ 1303], 40.00th=[ 1319], 50.00th=[41157], 60.00th=[41681], 00:39:16.696 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:39:16.696 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:39:16.696 | 99.99th=[42206] 
00:39:16.696 bw ( KiB/s): min= 704, max= 768, per=66.14%, avg=742.40, stdev=30.45, samples=20 00:39:16.696 iops : min= 176, max= 192, avg=185.60, stdev= 7.61, samples=20 00:39:16.696 lat (msec) : 2=49.89%, 50=50.11% 00:39:16.696 cpu : usr=96.85%, sys=2.92%, ctx=9, majf=0, minf=166 00:39:16.696 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:16.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:16.696 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:16.696 issued rwts: total=1860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:16.696 latency : target=0, window=0, percentile=100.00%, depth=4 00:39:16.696 filename1: (groupid=0, jobs=1): err= 0: pid=345474: Wed May 15 20:31:08 2024 00:39:16.696 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10041msec) 00:39:16.696 slat (nsec): min=8195, max=32093, avg=8614.05, stdev=1686.51 00:39:16.696 clat (usec): min=41795, max=43182, avg=41986.09, stdev=105.96 00:39:16.696 lat (usec): min=41818, max=43209, avg=41994.71, stdev=106.08 00:39:16.696 clat percentiles (usec): 00:39:16.696 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:39:16.696 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:39:16.696 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:39:16.696 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:39:16.696 | 99.99th=[43254] 00:39:16.696 bw ( KiB/s): min= 352, max= 384, per=33.87%, avg=380.80, stdev= 9.85, samples=20 00:39:16.696 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:39:16.696 lat (msec) : 50=100.00% 00:39:16.696 cpu : usr=97.11%, sys=2.65%, ctx=12, majf=0, minf=94 00:39:16.696 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:16.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:16.696 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:16.696 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:16.696 latency : target=0, window=0, percentile=100.00%, depth=4 00:39:16.696 00:39:16.696 Run status group 0 (all jobs): 00:39:16.696 READ: bw=1122KiB/s (1149kB/s), 381KiB/s-743KiB/s (390kB/s-760kB/s), io=11.0MiB (11.5MB), run=10020-10041msec 00:39:16.696 20:31:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:39:16.696 20:31:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:39:16.696 20:31:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:39:16.696 20:31:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:16.696 20:31:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:39:16.696 20:31:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:16.696 20:31:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.696 20:31:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:16.696 20:31:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:16.696 20:31:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:16.696 20:31:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.696 20:31:08 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:16.697 20:31:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:16.697 20:31:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:39:16.697 20:31:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:16.697 20:31:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:39:16.697 20:31:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:16.697 20:31:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.697 20:31:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:16.697 20:31:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:16.697 20:31:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:16.697 20:31:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.697 20:31:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:16.697 20:31:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:16.697 00:39:16.697 real 0m11.383s 00:39:16.697 user 0m34.489s 00:39:16.697 sys 0m0.865s 00:39:16.697 20:31:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:16.697 20:31:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:16.697 ************************************ 00:39:16.697 END TEST fio_dif_1_multi_subsystems 00:39:16.697 ************************************ 00:39:16.697 20:31:09 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:39:16.697 20:31:09 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:16.697 20:31:09 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:16.697 20:31:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:16.697 ************************************ 00:39:16.697 START TEST fio_dif_rand_params 00:39:16.697 ************************************ 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:16.697 bdev_null0 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:16.697 [2024-05-15 20:31:09.099558] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:39:16.697 20:31:09 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:16.697 { 00:39:16.697 "params": { 00:39:16.697 "name": "Nvme$subsystem", 00:39:16.697 "trtype": "$TEST_TRANSPORT", 00:39:16.697 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:16.697 "adrfam": "ipv4", 00:39:16.697 "trsvcid": "$NVMF_PORT", 00:39:16.697 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:16.697 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:16.697 "hdgst": ${hdgst:-false}, 00:39:16.697 "ddgst": ${ddgst:-false} 00:39:16.697 }, 00:39:16.697 "method": "bdev_nvme_attach_controller" 00:39:16.697 } 00:39:16.697 EOF 00:39:16.697 )") 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
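(The create_subsystems/destroy_subsystems helpers traced earlier reduce to a short RPC sequence against the running nvmf target. Assuming the standard scripts/rpc.py CLI is used in place of the harness's rpc_cmd wrapper, subsystem 0 of this run could be set up and torn down by hand roughly as below; the subcommands and arguments are copied from the trace.)

# Setup: a null bdev with 16-byte metadata and DIF type 3, exposed over NVMe/TCP.
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Teardown mirrors destroy_subsystems in the trace above.
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
scripts/rpc.py bdev_null_delete bdev_null0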
00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:16.697 "params": { 00:39:16.697 "name": "Nvme0", 00:39:16.697 "trtype": "tcp", 00:39:16.697 "traddr": "10.0.0.2", 00:39:16.697 "adrfam": "ipv4", 00:39:16.697 "trsvcid": "4420", 00:39:16.697 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:16.697 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:16.697 "hdgst": false, 00:39:16.697 "ddgst": false 00:39:16.697 }, 00:39:16.697 "method": "bdev_nvme_attach_controller" 00:39:16.697 }' 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:39:16.697 20:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:39:16.698 20:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:16.698 20:31:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:17.266 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:39:17.266 ... 
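(For reference, the LD_PRELOAD/fio invocation captured just above can be reproduced outside the harness. This sketch substitutes ordinary files for the /dev/fd/62 and /dev/fd/61 descriptors that the wrapper generates on the fly; bdev.json and dif.fio are placeholder names, not files produced by this job.)

# Run fio with SPDK's external bdev ioengine, preloading the plugin built at build/fio/spdk_bdev.
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json dif.fio

Passing the bdev configuration via --spdk_json_conf is what lets fio attach directly to the NVMe-oF namespaces created earlier in the test, without a separate SPDK application configuration.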
00:39:17.266 fio-3.35 00:39:17.266 Starting 3 threads 00:39:17.266 EAL: No free 2048 kB hugepages reported on node 1 00:39:23.863 00:39:23.863 filename0: (groupid=0, jobs=1): err= 0: pid=347752: Wed May 15 20:31:15 2024 00:39:23.863 read: IOPS=208, BW=26.0MiB/s (27.3MB/s)(130MiB/5007msec) 00:39:23.863 slat (nsec): min=8222, max=33470, avg=8900.96, stdev=1038.00 00:39:23.863 clat (usec): min=4786, max=55488, avg=14389.80, stdev=14443.90 00:39:23.863 lat (usec): min=4794, max=55522, avg=14398.70, stdev=14444.03 00:39:23.863 clat percentiles (usec): 00:39:23.863 | 1.00th=[ 5407], 5.00th=[ 5604], 10.00th=[ 6063], 20.00th=[ 6915], 00:39:23.863 | 30.00th=[ 7635], 40.00th=[ 8225], 50.00th=[ 8848], 60.00th=[ 9503], 00:39:23.863 | 70.00th=[10290], 80.00th=[11600], 90.00th=[47973], 95.00th=[49546], 00:39:23.863 | 99.00th=[52167], 99.50th=[53216], 99.90th=[55313], 99.95th=[55313], 00:39:23.863 | 99.99th=[55313] 00:39:23.863 bw ( KiB/s): min=16896, max=32000, per=39.20%, avg=26624.00, stdev=4913.90, samples=10 00:39:23.863 iops : min= 132, max= 250, avg=208.00, stdev=38.39, samples=10 00:39:23.863 lat (msec) : 10=65.87%, 20=19.75%, 50=9.78%, 100=4.60% 00:39:23.863 cpu : usr=96.52%, sys=3.22%, ctx=10, majf=0, minf=95 00:39:23.863 IO depths : 1=1.8%, 2=98.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:23.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:23.863 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:23.863 issued rwts: total=1043,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:23.863 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:23.863 filename0: (groupid=0, jobs=1): err= 0: pid=347753: Wed May 15 20:31:15 2024 00:39:23.863 read: IOPS=165, BW=20.7MiB/s (21.7MB/s)(104MiB/5008msec) 00:39:23.863 slat (nsec): min=8236, max=33239, avg=9033.97, stdev=1494.33 00:39:23.863 clat (usec): min=6422, max=93888, avg=18105.62, stdev=16899.06 00:39:23.863 lat (usec): min=6431, max=93897, avg=18114.65, stdev=16899.02 00:39:23.863 clat percentiles (usec): 00:39:23.863 | 1.00th=[ 7111], 5.00th=[ 7701], 10.00th=[ 8160], 20.00th=[ 9372], 00:39:23.863 | 30.00th=[10159], 40.00th=[10814], 50.00th=[11600], 60.00th=[12387], 00:39:23.863 | 70.00th=[13304], 80.00th=[14746], 90.00th=[50594], 95.00th=[53216], 00:39:23.863 | 99.00th=[90702], 99.50th=[92799], 99.90th=[93848], 99.95th=[93848], 00:39:23.863 | 99.99th=[93848] 00:39:23.863 bw ( KiB/s): min=11520, max=31744, per=31.14%, avg=21148.90, stdev=6394.95, samples=10 00:39:23.863 iops : min= 90, max= 248, avg=165.20, stdev=49.98, samples=10 00:39:23.863 lat (msec) : 10=29.07%, 20=54.89%, 50=3.50%, 100=12.55% 00:39:23.863 cpu : usr=95.87%, sys=3.85%, ctx=9, majf=0, minf=98 00:39:23.863 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:23.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:23.863 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:23.863 issued rwts: total=829,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:23.863 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:23.863 filename0: (groupid=0, jobs=1): err= 0: pid=347754: Wed May 15 20:31:15 2024 00:39:23.863 read: IOPS=156, BW=19.6MiB/s (20.5MB/s)(98.1MiB/5007msec) 00:39:23.863 slat (nsec): min=8236, max=37191, avg=9611.98, stdev=1686.76 00:39:23.863 clat (usec): min=5706, max=95238, avg=19113.42, stdev=18358.29 00:39:23.863 lat (usec): min=5715, max=95246, avg=19123.04, stdev=18358.41 00:39:23.863 clat percentiles (usec): 
00:39:23.863 | 1.00th=[ 6259], 5.00th=[ 7701], 10.00th=[ 8291], 20.00th=[ 9372], 00:39:23.863 | 30.00th=[10028], 40.00th=[10945], 50.00th=[11994], 60.00th=[12911], 00:39:23.863 | 70.00th=[13960], 80.00th=[15533], 90.00th=[51119], 95.00th=[53740], 00:39:23.863 | 99.00th=[91751], 99.50th=[92799], 99.90th=[94897], 99.95th=[94897], 00:39:23.863 | 99.99th=[94897] 00:39:23.863 bw ( KiB/s): min= 9984, max=25600, per=29.52%, avg=20044.80, stdev=4766.54, samples=10 00:39:23.863 iops : min= 78, max= 200, avg=156.60, stdev=37.24, samples=10 00:39:23.863 lat (msec) : 10=29.17%, 20=53.63%, 50=3.69%, 100=13.50% 00:39:23.863 cpu : usr=96.14%, sys=3.56%, ctx=10, majf=0, minf=69 00:39:23.863 IO depths : 1=2.3%, 2=97.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:23.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:23.863 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:23.863 issued rwts: total=785,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:23.863 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:23.863 00:39:23.863 Run status group 0 (all jobs): 00:39:23.863 READ: bw=66.3MiB/s (69.5MB/s), 19.6MiB/s-26.0MiB/s (20.5MB/s-27.3MB/s), io=332MiB (348MB), run=5007-5008msec 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:23.864 bdev_null0 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:23.864 [2024-05-15 20:31:15.267433] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:23.864 bdev_null1 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:23.864 bdev_null2 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 
00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:23.864 { 00:39:23.864 "params": { 00:39:23.864 "name": "Nvme$subsystem", 00:39:23.864 "trtype": "$TEST_TRANSPORT", 00:39:23.864 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:23.864 "adrfam": "ipv4", 00:39:23.864 "trsvcid": "$NVMF_PORT", 00:39:23.864 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:23.864 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:23.864 "hdgst": ${hdgst:-false}, 00:39:23.864 "ddgst": ${ddgst:-false} 00:39:23.864 }, 00:39:23.864 "method": "bdev_nvme_attach_controller" 00:39:23.864 } 00:39:23.864 EOF 00:39:23.864 )") 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:23.864 20:31:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:23.864 { 00:39:23.864 "params": { 00:39:23.864 "name": "Nvme$subsystem", 00:39:23.864 "trtype": "$TEST_TRANSPORT", 00:39:23.864 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:23.864 "adrfam": "ipv4", 00:39:23.864 "trsvcid": "$NVMF_PORT", 00:39:23.864 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:23.864 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:23.864 "hdgst": ${hdgst:-false}, 00:39:23.864 "ddgst": ${ddgst:-false} 00:39:23.865 }, 00:39:23.865 "method": "bdev_nvme_attach_controller" 00:39:23.865 } 00:39:23.865 EOF 00:39:23.865 )") 00:39:23.865 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:39:23.865 20:31:15 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:39:23.865 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:23.865 20:31:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:39:23.865 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:39:23.865 20:31:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:23.865 20:31:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:23.865 20:31:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:23.865 { 00:39:23.865 "params": { 00:39:23.865 "name": "Nvme$subsystem", 00:39:23.865 "trtype": "$TEST_TRANSPORT", 00:39:23.865 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:23.865 "adrfam": "ipv4", 00:39:23.865 "trsvcid": "$NVMF_PORT", 00:39:23.865 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:23.865 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:23.865 "hdgst": ${hdgst:-false}, 00:39:23.865 "ddgst": ${ddgst:-false} 00:39:23.865 }, 00:39:23.865 "method": "bdev_nvme_attach_controller" 00:39:23.865 } 00:39:23.865 EOF 00:39:23.865 )") 00:39:23.865 20:31:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:39:23.865 20:31:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:39:23.865 20:31:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:39:23.865 20:31:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:23.865 "params": { 00:39:23.865 "name": "Nvme0", 00:39:23.865 "trtype": "tcp", 00:39:23.865 "traddr": "10.0.0.2", 00:39:23.865 "adrfam": "ipv4", 00:39:23.865 "trsvcid": "4420", 00:39:23.865 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:23.865 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:23.865 "hdgst": false, 00:39:23.865 "ddgst": false 00:39:23.865 }, 00:39:23.865 "method": "bdev_nvme_attach_controller" 00:39:23.865 },{ 00:39:23.865 "params": { 00:39:23.865 "name": "Nvme1", 00:39:23.865 "trtype": "tcp", 00:39:23.865 "traddr": "10.0.0.2", 00:39:23.865 "adrfam": "ipv4", 00:39:23.865 "trsvcid": "4420", 00:39:23.865 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:23.865 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:23.865 "hdgst": false, 00:39:23.865 "ddgst": false 00:39:23.865 }, 00:39:23.865 "method": "bdev_nvme_attach_controller" 00:39:23.865 },{ 00:39:23.865 "params": { 00:39:23.865 "name": "Nvme2", 00:39:23.865 "trtype": "tcp", 00:39:23.865 "traddr": "10.0.0.2", 00:39:23.865 "adrfam": "ipv4", 00:39:23.865 "trsvcid": "4420", 00:39:23.865 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:39:23.865 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:39:23.865 "hdgst": false, 00:39:23.865 "ddgst": false 00:39:23.865 }, 00:39:23.865 "method": "bdev_nvme_attach_controller" 00:39:23.865 }' 00:39:23.865 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:39:23.865 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:39:23.865 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:39:23.865 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:23.865 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:39:23.865 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:39:23.865 20:31:15 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1341 -- # asan_lib= 00:39:23.865 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:39:23.865 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:23.865 20:31:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:23.865 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:23.865 ... 00:39:23.865 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:23.865 ... 00:39:23.865 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:23.865 ... 00:39:23.865 fio-3.35 00:39:23.865 Starting 24 threads 00:39:23.865 EAL: No free 2048 kB hugepages reported on node 1 00:39:36.103 00:39:36.103 filename0: (groupid=0, jobs=1): err= 0: pid=349095: Wed May 15 20:31:26 2024 00:39:36.103 read: IOPS=500, BW=2002KiB/s (2050kB/s)(19.6MiB/10006msec) 00:39:36.103 slat (nsec): min=8246, max=78096, avg=10964.89, stdev=6276.76 00:39:36.103 clat (usec): min=4233, max=34244, avg=31873.07, stdev=3026.02 00:39:36.103 lat (usec): min=4249, max=34254, avg=31884.03, stdev=3025.20 00:39:36.103 clat percentiles (usec): 00:39:36.103 | 1.00th=[12911], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:39:36.103 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:39:36.103 | 70.00th=[32637], 80.00th=[32900], 90.00th=[32900], 95.00th=[33162], 00:39:36.103 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:39:36.103 | 99.99th=[34341] 00:39:36.103 bw ( KiB/s): min= 1920, max= 2304, per=4.18%, avg=2000.84, stdev=97.39, samples=19 00:39:36.103 iops : min= 480, max= 576, avg=500.21, stdev=24.35, samples=19 00:39:36.103 lat (msec) : 10=0.64%, 20=0.96%, 50=98.40% 00:39:36.103 cpu : usr=99.21%, sys=0.49%, ctx=14, majf=0, minf=72 00:39:36.103 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:36.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.103 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.103 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:36.103 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:36.103 filename0: (groupid=0, jobs=1): err= 0: pid=349097: Wed May 15 20:31:26 2024 00:39:36.103 read: IOPS=501, BW=2005KiB/s (2053kB/s)(19.6MiB/10024msec) 00:39:36.103 slat (nsec): min=8277, max=84730, avg=23814.74, stdev=15332.07 00:39:36.103 clat (usec): min=4176, max=33719, avg=31729.99, stdev=3543.40 00:39:36.103 lat (usec): min=4197, max=33729, avg=31753.81, stdev=3543.70 00:39:36.103 clat percentiles (usec): 00:39:36.103 | 1.00th=[ 4621], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:39:36.103 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:39:36.103 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[32900], 00:39:36.103 | 99.00th=[33424], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:39:36.103 | 99.99th=[33817] 00:39:36.103 bw ( KiB/s): min= 1920, max= 2560, per=4.18%, avg=2003.20, stdev=145.50, samples=20 00:39:36.103 iops : min= 480, max= 640, avg=500.80, stdev=36.37, samples=20 00:39:36.103 lat (msec) : 10=1.59%, 50=98.41% 
00:39:36.103 cpu : usr=99.04%, sys=0.65%, ctx=6, majf=0, minf=56 00:39:36.104 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:36.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.104 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.104 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:36.104 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:36.104 filename0: (groupid=0, jobs=1): err= 0: pid=349098: Wed May 15 20:31:26 2024 00:39:36.104 read: IOPS=494, BW=1976KiB/s (2024kB/s)(19.3MiB/10006msec) 00:39:36.104 slat (usec): min=8, max=101, avg=27.01, stdev=14.52 00:39:36.104 clat (usec): min=21372, max=46916, avg=32134.87, stdev=883.56 00:39:36.104 lat (usec): min=21383, max=46943, avg=32161.88, stdev=883.66 00:39:36.104 clat percentiles (usec): 00:39:36.104 | 1.00th=[31065], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:39:36.104 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:39:36.104 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:39:36.104 | 99.00th=[33424], 99.50th=[33817], 99.90th=[34341], 99.95th=[34341], 00:39:36.104 | 99.99th=[46924] 00:39:36.104 bw ( KiB/s): min= 1920, max= 2048, per=4.12%, avg=1973.89, stdev=64.93, samples=19 00:39:36.104 iops : min= 480, max= 512, avg=493.47, stdev=16.23, samples=19 00:39:36.104 lat (msec) : 50=100.00% 00:39:36.104 cpu : usr=98.08%, sys=1.01%, ctx=35, majf=0, minf=79 00:39:36.104 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:36.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.104 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.104 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:36.104 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:36.104 filename0: (groupid=0, jobs=1): err= 0: pid=349099: Wed May 15 20:31:26 2024 00:39:36.104 read: IOPS=494, BW=1976KiB/s (2024kB/s)(19.3MiB/10007msec) 00:39:36.104 slat (usec): min=6, max=100, avg=27.86, stdev=15.37 00:39:36.104 clat (usec): min=8121, max=55801, avg=32114.03, stdev=2071.00 00:39:36.104 lat (usec): min=8148, max=55820, avg=32141.89, stdev=2070.64 00:39:36.104 clat percentiles (usec): 00:39:36.104 | 1.00th=[30540], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:39:36.104 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:39:36.104 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:39:36.104 | 99.00th=[33817], 99.50th=[34341], 99.90th=[55837], 99.95th=[55837], 00:39:36.104 | 99.99th=[55837] 00:39:36.104 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1967.16, stdev=76.45, samples=19 00:39:36.104 iops : min= 448, max= 512, avg=491.79, stdev=19.11, samples=19 00:39:36.104 lat (msec) : 10=0.32%, 50=99.35%, 100=0.32% 00:39:36.104 cpu : usr=97.35%, sys=1.36%, ctx=51, majf=0, minf=69 00:39:36.104 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:36.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.104 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.104 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:36.104 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:36.104 filename0: (groupid=0, jobs=1): err= 0: pid=349100: Wed May 15 20:31:26 2024 00:39:36.104 read: IOPS=492, BW=1971KiB/s 
(2018kB/s)(19.2MiB/10002msec) 00:39:36.104 slat (nsec): min=5751, max=99021, avg=24652.25, stdev=14529.63 00:39:36.104 clat (usec): min=21835, max=66293, avg=32257.04, stdev=1297.01 00:39:36.104 lat (usec): min=21845, max=66309, avg=32281.69, stdev=1295.87 00:39:36.104 clat percentiles (usec): 00:39:36.104 | 1.00th=[31065], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:39:36.104 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:39:36.104 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:39:36.104 | 99.00th=[33817], 99.50th=[34341], 99.90th=[50070], 99.95th=[50070], 00:39:36.104 | 99.99th=[66323] 00:39:36.104 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1967.16, stdev=76.45, samples=19 00:39:36.104 iops : min= 448, max= 512, avg=491.79, stdev=19.11, samples=19 00:39:36.104 lat (msec) : 50=99.96%, 100=0.04% 00:39:36.104 cpu : usr=95.31%, sys=2.30%, ctx=86, majf=0, minf=63 00:39:36.104 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:36.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.104 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.104 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:36.104 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:36.104 filename0: (groupid=0, jobs=1): err= 0: pid=349101: Wed May 15 20:31:26 2024 00:39:36.104 read: IOPS=492, BW=1971KiB/s (2018kB/s)(19.2MiB/10002msec) 00:39:36.104 slat (nsec): min=5859, max=99726, avg=16273.91, stdev=13132.60 00:39:36.104 clat (usec): min=29640, max=49166, avg=32342.54, stdev=1086.44 00:39:36.104 lat (usec): min=29687, max=49182, avg=32358.81, stdev=1085.12 00:39:36.104 clat percentiles (usec): 00:39:36.104 | 1.00th=[31327], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:39:36.104 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:39:36.104 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:39:36.104 | 99.00th=[33817], 99.50th=[34341], 99.90th=[49021], 99.95th=[49021], 00:39:36.104 | 99.99th=[49021] 00:39:36.104 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1967.16, stdev=76.45, samples=19 00:39:36.104 iops : min= 448, max= 512, avg=491.79, stdev=19.11, samples=19 00:39:36.104 lat (msec) : 50=100.00% 00:39:36.104 cpu : usr=97.98%, sys=1.15%, ctx=756, majf=0, minf=67 00:39:36.104 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:36.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.104 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.104 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:36.104 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:36.104 filename0: (groupid=0, jobs=1): err= 0: pid=349102: Wed May 15 20:31:26 2024 00:39:36.104 read: IOPS=494, BW=1979KiB/s (2027kB/s)(19.4MiB/10025msec) 00:39:36.104 slat (nsec): min=4979, max=87424, avg=28156.14, stdev=15506.67 00:39:36.104 clat (usec): min=21268, max=33665, avg=32072.16, stdev=997.56 00:39:36.104 lat (usec): min=21277, max=33694, avg=32100.31, stdev=998.41 00:39:36.104 clat percentiles (usec): 00:39:36.104 | 1.00th=[28705], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:39:36.104 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:39:36.104 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[32900], 00:39:36.104 | 99.00th=[33424], 99.50th=[33424], 99.90th=[33817], 
99.95th=[33817], 00:39:36.104 | 99.99th=[33817] 00:39:36.104 bw ( KiB/s): min= 1920, max= 2048, per=4.13%, avg=1977.60, stdev=65.33, samples=20 00:39:36.104 iops : min= 480, max= 512, avg=494.40, stdev=16.33, samples=20 00:39:36.104 lat (msec) : 50=100.00% 00:39:36.104 cpu : usr=98.86%, sys=0.67%, ctx=128, majf=0, minf=58 00:39:36.104 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:36.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.104 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.104 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:36.104 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:36.104 filename0: (groupid=0, jobs=1): err= 0: pid=349103: Wed May 15 20:31:26 2024 00:39:36.104 read: IOPS=499, BW=1997KiB/s (2045kB/s)(19.6MiB/10029msec) 00:39:36.104 slat (usec): min=6, max=103, avg=20.13, stdev=15.28 00:39:36.104 clat (usec): min=13272, max=52759, avg=31897.98, stdev=4710.06 00:39:36.104 lat (usec): min=13298, max=52783, avg=31918.11, stdev=4712.20 00:39:36.104 clat percentiles (usec): 00:39:36.104 | 1.00th=[18744], 5.00th=[22152], 10.00th=[26346], 20.00th=[31589], 00:39:36.104 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:39:36.104 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[40109], 00:39:36.104 | 99.00th=[47973], 99.50th=[50594], 99.90th=[52691], 99.95th=[52691], 00:39:36.104 | 99.99th=[52691] 00:39:36.104 bw ( KiB/s): min= 1920, max= 2128, per=4.17%, avg=1997.60, stdev=60.38, samples=20 00:39:36.104 iops : min= 480, max= 532, avg=499.40, stdev=15.09, samples=20 00:39:36.104 lat (msec) : 20=2.38%, 50=96.87%, 100=0.76% 00:39:36.104 cpu : usr=98.41%, sys=0.97%, ctx=22, majf=0, minf=83 00:39:36.104 IO depths : 1=1.9%, 2=4.5%, 4=13.9%, 8=67.4%, 16=12.3%, 32=0.0%, >=64=0.0% 00:39:36.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.104 complete : 0=0.0%, 4=92.1%, 8=3.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.104 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:36.104 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:36.104 filename1: (groupid=0, jobs=1): err= 0: pid=349104: Wed May 15 20:31:26 2024 00:39:36.104 read: IOPS=555, BW=2222KiB/s (2276kB/s)(21.7MiB/10009msec) 00:39:36.104 slat (nsec): min=8216, max=76603, avg=15229.78, stdev=11025.83 00:39:36.104 clat (usec): min=2796, max=52297, avg=28692.55, stdev=6320.28 00:39:36.104 lat (usec): min=2809, max=52307, avg=28707.78, stdev=6323.60 00:39:36.104 clat percentiles (usec): 00:39:36.104 | 1.00th=[ 4621], 5.00th=[20841], 10.00th=[21365], 20.00th=[21890], 00:39:36.104 | 30.00th=[22938], 40.00th=[31327], 50.00th=[31851], 60.00th=[32113], 00:39:36.104 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[33817], 00:39:36.104 | 99.00th=[44827], 99.50th=[48497], 99.90th=[52167], 99.95th=[52167], 00:39:36.104 | 99.99th=[52167] 00:39:36.104 bw ( KiB/s): min= 1920, max= 3408, per=4.66%, avg=2233.68, stdev=419.65, samples=19 00:39:36.104 iops : min= 480, max= 852, avg=558.42, stdev=104.91, samples=19 00:39:36.104 lat (msec) : 4=0.50%, 10=0.99%, 20=2.05%, 50=96.24%, 100=0.22% 00:39:36.104 cpu : usr=98.65%, sys=1.03%, ctx=17, majf=0, minf=71 00:39:36.104 IO depths : 1=2.8%, 2=5.8%, 4=15.3%, 8=66.1%, 16=10.0%, 32=0.0%, >=64=0.0% 00:39:36.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.104 complete : 0=0.0%, 4=91.5%, 8=3.0%, 16=5.5%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:39:36.104 issued rwts: total=5561,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:36.104 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:36.104 filename1: (groupid=0, jobs=1): err= 0: pid=349105: Wed May 15 20:31:26 2024 00:39:36.104 read: IOPS=494, BW=1979KiB/s (2027kB/s)(19.4MiB/10025msec) 00:39:36.104 slat (nsec): min=8283, max=83876, avg=17968.87, stdev=12195.01 00:39:36.104 clat (usec): min=21610, max=43729, avg=32198.32, stdev=1514.13 00:39:36.104 lat (usec): min=21620, max=43777, avg=32216.29, stdev=1513.82 00:39:36.104 clat percentiles (usec): 00:39:36.104 | 1.00th=[25035], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:39:36.104 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:39:36.104 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:39:36.104 | 99.00th=[38536], 99.50th=[39060], 99.90th=[41681], 99.95th=[41681], 00:39:36.104 | 99.99th=[43779] 00:39:36.104 bw ( KiB/s): min= 1920, max= 2048, per=4.13%, avg=1977.60, stdev=65.33, samples=20 00:39:36.104 iops : min= 480, max= 512, avg=494.40, stdev=16.33, samples=20 00:39:36.104 lat (msec) : 50=100.00% 00:39:36.105 cpu : usr=98.95%, sys=0.72%, ctx=18, majf=0, minf=62 00:39:36.105 IO depths : 1=5.3%, 2=11.0%, 4=23.6%, 8=52.9%, 16=7.2%, 32=0.0%, >=64=0.0% 00:39:36.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.105 complete : 0=0.0%, 4=93.7%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.105 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:36.105 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:36.105 filename1: (groupid=0, jobs=1): err= 0: pid=349106: Wed May 15 20:31:26 2024 00:39:36.105 read: IOPS=494, BW=1976KiB/s (2024kB/s)(19.3MiB/10006msec) 00:39:36.105 slat (nsec): min=8236, max=96915, avg=22405.30, stdev=13706.02 00:39:36.105 clat (usec): min=21095, max=42923, avg=32191.90, stdev=871.03 00:39:36.105 lat (usec): min=21113, max=42943, avg=32214.30, stdev=870.66 00:39:36.105 clat percentiles (usec): 00:39:36.105 | 1.00th=[31327], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:39:36.105 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:39:36.105 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:39:36.105 | 99.00th=[33424], 99.50th=[33817], 99.90th=[34341], 99.95th=[34341], 00:39:36.105 | 99.99th=[42730] 00:39:36.105 bw ( KiB/s): min= 1920, max= 2048, per=4.12%, avg=1973.89, stdev=64.93, samples=19 00:39:36.105 iops : min= 480, max= 512, avg=493.47, stdev=16.23, samples=19 00:39:36.105 lat (msec) : 50=100.00% 00:39:36.105 cpu : usr=99.22%, sys=0.49%, ctx=12, majf=0, minf=77 00:39:36.105 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:36.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.105 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.105 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:36.105 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:36.105 filename1: (groupid=0, jobs=1): err= 0: pid=349108: Wed May 15 20:31:26 2024 00:39:36.105 read: IOPS=499, BW=1999KiB/s (2047kB/s)(19.5MiB/10005msec) 00:39:36.105 slat (usec): min=7, max=503, avg=15.85, stdev=13.02 00:39:36.105 clat (usec): min=14734, max=62596, avg=31926.43, stdev=4929.16 00:39:36.105 lat (usec): min=14742, max=62616, avg=31942.28, stdev=4929.13 00:39:36.105 clat percentiles (usec): 00:39:36.105 | 1.00th=[19530], 5.00th=[22938], 
10.00th=[26084], 20.00th=[29230], 00:39:36.105 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:39:36.105 | 70.00th=[32637], 80.00th=[32900], 90.00th=[37487], 95.00th=[40109], 00:39:36.105 | 99.00th=[47449], 99.50th=[49546], 99.90th=[62653], 99.95th=[62653], 00:39:36.105 | 99.99th=[62653] 00:39:36.105 bw ( KiB/s): min= 1808, max= 2160, per=4.17%, avg=1994.11, stdev=82.83, samples=19 00:39:36.105 iops : min= 452, max= 540, avg=498.53, stdev=20.71, samples=19 00:39:36.105 lat (msec) : 20=1.62%, 50=97.94%, 100=0.44% 00:39:36.105 cpu : usr=98.74%, sys=0.86%, ctx=35, majf=0, minf=66 00:39:36.105 IO depths : 1=1.0%, 2=2.1%, 4=8.0%, 8=74.5%, 16=14.4%, 32=0.0%, >=64=0.0% 00:39:36.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.105 complete : 0=0.0%, 4=90.3%, 8=6.8%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.105 issued rwts: total=5000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:36.105 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:36.105 filename1: (groupid=0, jobs=1): err= 0: pid=349109: Wed May 15 20:31:26 2024 00:39:36.105 read: IOPS=495, BW=1981KiB/s (2028kB/s)(19.4MiB/10017msec) 00:39:36.105 slat (nsec): min=8242, max=73468, avg=13709.13, stdev=9934.63 00:39:36.105 clat (usec): min=18219, max=45122, avg=32196.24, stdev=1239.14 00:39:36.105 lat (usec): min=18228, max=45133, avg=32209.95, stdev=1238.45 00:39:36.105 clat percentiles (usec): 00:39:36.105 | 1.00th=[23987], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:39:36.105 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:39:36.105 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:39:36.105 | 99.00th=[33817], 99.50th=[33817], 99.90th=[37487], 99.95th=[38011], 00:39:36.105 | 99.99th=[45351] 00:39:36.105 bw ( KiB/s): min= 1920, max= 2048, per=4.13%, avg=1977.60, stdev=65.33, samples=20 00:39:36.105 iops : min= 480, max= 512, avg=494.40, stdev=16.33, samples=20 00:39:36.105 lat (msec) : 20=0.04%, 50=99.96% 00:39:36.105 cpu : usr=99.15%, sys=0.55%, ctx=12, majf=0, minf=63 00:39:36.105 IO depths : 1=5.7%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.8%, 32=0.0%, >=64=0.0% 00:39:36.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.105 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.105 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:36.105 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:36.105 filename1: (groupid=0, jobs=1): err= 0: pid=349110: Wed May 15 20:31:26 2024 00:39:36.105 read: IOPS=501, BW=2007KiB/s (2055kB/s)(19.6MiB/10006msec) 00:39:36.105 slat (nsec): min=5762, max=94755, avg=20408.12, stdev=14536.85 00:39:36.105 clat (usec): min=9699, max=66512, avg=31716.23, stdev=3695.65 00:39:36.105 lat (usec): min=9708, max=66528, avg=31736.63, stdev=3696.63 00:39:36.105 clat percentiles (usec): 00:39:36.105 | 1.00th=[18482], 5.00th=[23725], 10.00th=[31327], 20.00th=[31589], 00:39:36.105 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:39:36.105 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:39:36.105 | 99.00th=[42730], 99.50th=[46400], 99.90th=[66323], 99.95th=[66323], 00:39:36.105 | 99.99th=[66323] 00:39:36.105 bw ( KiB/s): min= 1795, max= 2240, per=4.19%, avg=2006.05, stdev=102.97, samples=19 00:39:36.105 iops : min= 448, max= 560, avg=501.47, stdev=25.83, samples=19 00:39:36.105 lat (msec) : 10=0.14%, 20=1.55%, 50=97.99%, 100=0.32% 00:39:36.105 cpu : usr=99.16%, sys=0.52%, 
ctx=36, majf=0, minf=56 00:39:36.105 IO depths : 1=5.0%, 2=10.1%, 4=20.9%, 8=55.9%, 16=8.0%, 32=0.0%, >=64=0.0% 00:39:36.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.105 complete : 0=0.0%, 4=93.1%, 8=1.7%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.105 issued rwts: total=5020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:36.105 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:36.105 filename1: (groupid=0, jobs=1): err= 0: pid=349111: Wed May 15 20:31:26 2024 00:39:36.105 read: IOPS=492, BW=1970KiB/s (2018kB/s)(19.2MiB/10004msec) 00:39:36.105 slat (nsec): min=6342, max=86467, avg=24814.87, stdev=15156.15 00:39:36.105 clat (usec): min=6916, max=63893, avg=32268.55, stdev=2811.31 00:39:36.105 lat (usec): min=6925, max=63911, avg=32293.36, stdev=2811.29 00:39:36.105 clat percentiles (usec): 00:39:36.105 | 1.00th=[24511], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:39:36.105 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:39:36.105 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:39:36.105 | 99.00th=[41157], 99.50th=[46400], 99.90th=[63701], 99.95th=[63701], 00:39:36.105 | 99.99th=[63701] 00:39:36.105 bw ( KiB/s): min= 1795, max= 2048, per=4.10%, avg=1963.11, stdev=69.76, samples=19 00:39:36.105 iops : min= 448, max= 512, avg=490.74, stdev=17.54, samples=19 00:39:36.105 lat (msec) : 10=0.20%, 20=0.28%, 50=99.19%, 100=0.32% 00:39:36.105 cpu : usr=98.89%, sys=0.76%, ctx=69, majf=0, minf=70 00:39:36.105 IO depths : 1=3.8%, 2=8.7%, 4=21.2%, 8=56.8%, 16=9.6%, 32=0.0%, >=64=0.0% 00:39:36.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.105 complete : 0=0.0%, 4=93.5%, 8=1.5%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.105 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:36.105 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:36.105 filename1: (groupid=0, jobs=1): err= 0: pid=349112: Wed May 15 20:31:26 2024 00:39:36.105 read: IOPS=492, BW=1971KiB/s (2018kB/s)(19.2MiB/10002msec) 00:39:36.106 slat (nsec): min=8105, max=95822, avg=28093.66, stdev=15445.63 00:39:36.106 clat (usec): min=21810, max=49193, avg=32206.73, stdev=1145.65 00:39:36.106 lat (usec): min=21821, max=49215, avg=32234.82, stdev=1144.83 00:39:36.106 clat percentiles (usec): 00:39:36.106 | 1.00th=[31065], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:39:36.106 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:39:36.106 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:39:36.106 | 99.00th=[33817], 99.50th=[33817], 99.90th=[49021], 99.95th=[49021], 00:39:36.106 | 99.99th=[49021] 00:39:36.106 bw ( KiB/s): min= 1795, max= 2048, per=4.11%, avg=1967.32, stdev=76.07, samples=19 00:39:36.106 iops : min= 448, max= 512, avg=491.79, stdev=19.11, samples=19 00:39:36.106 lat (msec) : 50=100.00% 00:39:36.106 cpu : usr=99.03%, sys=0.65%, ctx=10, majf=0, minf=66 00:39:36.106 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:36.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.106 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.106 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:36.106 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:36.106 filename2: (groupid=0, jobs=1): err= 0: pid=349113: Wed May 15 20:31:26 2024 00:39:36.106 read: IOPS=505, BW=2022KiB/s (2070kB/s)(19.8MiB/10004msec) 
00:39:36.106 slat (usec): min=6, max=141, avg=17.65, stdev=12.73 00:39:36.106 clat (usec): min=5788, max=79541, avg=31525.26, stdev=4845.08 00:39:36.106 lat (usec): min=5797, max=79557, avg=31542.91, stdev=4845.91 00:39:36.106 clat percentiles (usec): 00:39:36.106 | 1.00th=[17433], 5.00th=[22414], 10.00th=[25822], 20.00th=[31327], 00:39:36.106 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:39:36.106 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[38536], 00:39:36.106 | 99.00th=[45351], 99.50th=[47973], 99.90th=[63701], 99.95th=[63701], 00:39:36.106 | 99.99th=[79168] 00:39:36.106 bw ( KiB/s): min= 1843, max= 2208, per=4.21%, avg=2015.32, stdev=102.23, samples=19 00:39:36.106 iops : min= 460, max= 552, avg=503.79, stdev=25.63, samples=19 00:39:36.106 lat (msec) : 10=0.45%, 20=1.54%, 50=97.69%, 100=0.32% 00:39:36.106 cpu : usr=97.70%, sys=1.16%, ctx=58, majf=0, minf=52 00:39:36.106 IO depths : 1=3.0%, 2=6.4%, 4=14.9%, 8=64.9%, 16=10.9%, 32=0.0%, >=64=0.0% 00:39:36.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.106 complete : 0=0.0%, 4=91.6%, 8=4.1%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.106 issued rwts: total=5056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:36.106 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:36.106 filename2: (groupid=0, jobs=1): err= 0: pid=349114: Wed May 15 20:31:26 2024 00:39:36.106 read: IOPS=522, BW=2088KiB/s (2138kB/s)(20.4MiB/10003msec) 00:39:36.106 slat (nsec): min=8119, max=95524, avg=18290.61, stdev=12934.41 00:39:36.106 clat (usec): min=4516, max=63503, avg=30498.18, stdev=5071.63 00:39:36.106 lat (usec): min=4525, max=63524, avg=30516.47, stdev=5074.08 00:39:36.106 clat percentiles (usec): 00:39:36.106 | 1.00th=[18220], 5.00th=[20579], 10.00th=[22414], 20.00th=[27132], 00:39:36.106 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:39:36.106 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:39:36.106 | 99.00th=[44827], 99.50th=[45876], 99.90th=[63701], 99.95th=[63701], 00:39:36.106 | 99.99th=[63701] 00:39:36.106 bw ( KiB/s): min= 1792, max= 2368, per=4.31%, avg=2062.32, stdev=183.43, samples=19 00:39:36.106 iops : min= 448, max= 592, avg=515.58, stdev=45.86, samples=19 00:39:36.106 lat (msec) : 10=0.31%, 20=3.14%, 50=96.25%, 100=0.31% 00:39:36.106 cpu : usr=98.88%, sys=0.78%, ctx=50, majf=0, minf=98 00:39:36.106 IO depths : 1=3.9%, 2=8.1%, 4=18.1%, 8=60.8%, 16=9.0%, 32=0.0%, >=64=0.0% 00:39:36.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.106 complete : 0=0.0%, 4=92.2%, 8=2.6%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.106 issued rwts: total=5222,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:36.106 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:36.106 filename2: (groupid=0, jobs=1): err= 0: pid=349115: Wed May 15 20:31:26 2024 00:39:36.106 read: IOPS=493, BW=1973KiB/s (2021kB/s)(19.3MiB/10013msec) 00:39:36.106 slat (nsec): min=8103, max=98291, avg=17381.80, stdev=13618.04 00:39:36.106 clat (usec): min=14001, max=49706, avg=32358.29, stdev=1146.48 00:39:36.106 lat (usec): min=14043, max=49742, avg=32375.67, stdev=1145.98 00:39:36.106 clat percentiles (usec): 00:39:36.106 | 1.00th=[31327], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:39:36.106 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:39:36.106 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:39:36.106 | 99.00th=[33817], 99.50th=[36439], 
99.90th=[45351], 99.95th=[45351], 00:39:36.106 | 99.99th=[49546] 00:39:36.106 bw ( KiB/s): min= 1840, max= 2016, per=4.12%, avg=1972.21, stdev=38.40, samples=19 00:39:36.106 iops : min= 460, max= 504, avg=493.05, stdev= 9.60, samples=19 00:39:36.106 lat (msec) : 20=0.12%, 50=99.88% 00:39:36.106 cpu : usr=95.11%, sys=2.49%, ctx=150, majf=0, minf=70 00:39:36.106 IO depths : 1=0.2%, 2=0.3%, 4=0.9%, 8=80.3%, 16=18.4%, 32=0.0%, >=64=0.0% 00:39:36.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.106 complete : 0=0.0%, 4=89.6%, 8=10.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.106 issued rwts: total=4940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:36.106 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:36.106 filename2: (groupid=0, jobs=1): err= 0: pid=349116: Wed May 15 20:31:26 2024 00:39:36.106 read: IOPS=496, BW=1987KiB/s (2035kB/s)(19.4MiB/10004msec) 00:39:36.106 slat (nsec): min=8244, max=97724, avg=25080.87, stdev=15687.10 00:39:36.106 clat (usec): min=8115, max=64858, avg=31975.62, stdev=3168.18 00:39:36.106 lat (usec): min=8127, max=64880, avg=32000.70, stdev=3168.20 00:39:36.106 clat percentiles (usec): 00:39:36.106 | 1.00th=[21627], 5.00th=[27657], 10.00th=[31327], 20.00th=[31589], 00:39:36.106 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:39:36.106 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:39:36.106 | 99.00th=[39060], 99.50th=[43254], 99.90th=[64750], 99.95th=[64750], 00:39:36.106 | 99.99th=[64750] 00:39:36.106 bw ( KiB/s): min= 1792, max= 2112, per=4.13%, avg=1978.11, stdev=80.57, samples=19 00:39:36.106 iops : min= 448, max= 528, avg=494.53, stdev=20.14, samples=19 00:39:36.106 lat (msec) : 10=0.32%, 20=0.28%, 50=98.91%, 100=0.48% 00:39:36.106 cpu : usr=99.01%, sys=0.69%, ctx=14, majf=0, minf=71 00:39:36.106 IO depths : 1=5.1%, 2=10.3%, 4=21.4%, 8=55.2%, 16=8.0%, 32=0.0%, >=64=0.0% 00:39:36.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.106 complete : 0=0.0%, 4=93.2%, 8=1.5%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.106 issued rwts: total=4970,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:36.106 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:36.106 filename2: (groupid=0, jobs=1): err= 0: pid=349117: Wed May 15 20:31:26 2024 00:39:36.106 read: IOPS=494, BW=1979KiB/s (2027kB/s)(19.4MiB/10024msec) 00:39:36.106 slat (nsec): min=5312, max=87845, avg=26595.24, stdev=14585.64 00:39:36.106 clat (usec): min=22128, max=41326, avg=32080.32, stdev=1097.88 00:39:36.106 lat (usec): min=22138, max=41369, avg=32106.92, stdev=1098.88 00:39:36.106 clat percentiles (usec): 00:39:36.106 | 1.00th=[25035], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:39:36.106 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:39:36.106 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[32900], 00:39:36.106 | 99.00th=[33424], 99.50th=[33817], 99.90th=[39060], 99.95th=[40109], 00:39:36.106 | 99.99th=[41157] 00:39:36.107 bw ( KiB/s): min= 1920, max= 2048, per=4.13%, avg=1977.75, stdev=65.20, samples=20 00:39:36.107 iops : min= 480, max= 512, avg=494.40, stdev=16.33, samples=20 00:39:36.107 lat (msec) : 50=100.00% 00:39:36.107 cpu : usr=98.92%, sys=0.70%, ctx=70, majf=0, minf=69 00:39:36.107 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:39:36.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.107 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:39:36.107 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:36.107 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:36.107 filename2: (groupid=0, jobs=1): err= 0: pid=349118: Wed May 15 20:31:26 2024 00:39:36.107 read: IOPS=492, BW=1971KiB/s (2018kB/s)(19.2MiB/10001msec) 00:39:36.107 slat (nsec): min=6626, max=78782, avg=18883.72, stdev=11814.94 00:39:36.107 clat (usec): min=18160, max=63370, avg=32305.07, stdev=2053.08 00:39:36.107 lat (usec): min=18170, max=63389, avg=32323.96, stdev=2052.65 00:39:36.107 clat percentiles (usec): 00:39:36.107 | 1.00th=[31065], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:39:36.107 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:39:36.107 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:39:36.107 | 99.00th=[34341], 99.50th=[34341], 99.90th=[63177], 99.95th=[63177], 00:39:36.107 | 99.99th=[63177] 00:39:36.107 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1967.16, stdev=76.45, samples=19 00:39:36.107 iops : min= 448, max= 512, avg=491.79, stdev=19.11, samples=19 00:39:36.107 lat (msec) : 20=0.37%, 50=99.31%, 100=0.32% 00:39:36.107 cpu : usr=98.90%, sys=0.79%, ctx=14, majf=0, minf=58 00:39:36.107 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:36.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.107 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.107 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:36.107 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:36.107 filename2: (groupid=0, jobs=1): err= 0: pid=349119: Wed May 15 20:31:26 2024 00:39:36.107 read: IOPS=493, BW=1973KiB/s (2020kB/s)(19.3MiB/10023msec) 00:39:36.107 slat (nsec): min=5523, max=87662, avg=25238.15, stdev=13576.46 00:39:36.107 clat (usec): min=23394, max=49377, avg=32203.07, stdev=1264.67 00:39:36.107 lat (usec): min=23403, max=49392, avg=32228.31, stdev=1264.11 00:39:36.107 clat percentiles (usec): 00:39:36.107 | 1.00th=[30540], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:39:36.107 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:39:36.107 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[32900], 00:39:36.107 | 99.00th=[33424], 99.50th=[33817], 99.90th=[49546], 99.95th=[49546], 00:39:36.107 | 99.99th=[49546] 00:39:36.107 bw ( KiB/s): min= 1916, max= 2048, per=4.12%, avg=1971.00, stdev=64.51, samples=20 00:39:36.107 iops : min= 479, max= 512, avg=492.75, stdev=16.13, samples=20 00:39:36.107 lat (msec) : 50=100.00% 00:39:36.107 cpu : usr=98.75%, sys=0.87%, ctx=69, majf=0, minf=44 00:39:36.107 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:36.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.107 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.107 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:36.107 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:36.107 filename2: (groupid=0, jobs=1): err= 0: pid=349120: Wed May 15 20:31:26 2024 00:39:36.107 read: IOPS=494, BW=1979KiB/s (2027kB/s)(19.4MiB/10025msec) 00:39:36.107 slat (nsec): min=6794, max=95173, avg=22844.57, stdev=15640.91 00:39:36.107 clat (usec): min=21265, max=38481, avg=32146.77, stdev=1008.65 00:39:36.107 lat (usec): min=21274, max=38491, avg=32169.62, stdev=1008.38 00:39:36.107 clat percentiles (usec): 
00:39:36.107 | 1.00th=[25560], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:39:36.107 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:39:36.107 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[32900], 00:39:36.107 | 99.00th=[33424], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:39:36.107 | 99.99th=[38536] 00:39:36.107 bw ( KiB/s): min= 1920, max= 2048, per=4.13%, avg=1977.60, stdev=65.33, samples=20 00:39:36.107 iops : min= 480, max= 512, avg=494.40, stdev=16.33, samples=20 00:39:36.107 lat (msec) : 50=100.00% 00:39:36.107 cpu : usr=97.05%, sys=1.57%, ctx=26, majf=0, minf=48 00:39:36.107 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:39:36.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.107 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:36.107 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:36.107 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:36.107 00:39:36.107 Run status group 0 (all jobs): 00:39:36.107 READ: bw=46.7MiB/s (49.0MB/s), 1970KiB/s-2222KiB/s (2018kB/s-2276kB/s), io=469MiB (492MB), run=10001-10029msec 00:39:36.107 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:39:36.107 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:36.107 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:36.107 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:36.107 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:36.107 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:36.107 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:36.107 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:36.107 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:36.107 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:36.107 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:36.107 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:36.107 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:36.107 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:36.107 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:36.107 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:39:36.107 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:36.107 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:36.107 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:36.107 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:36.107 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:36.107 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:36.107 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # 
set +x 00:39:36.107 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:36.107 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:36.107 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:39:36.107 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:39:36.107 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:39:36.107 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:36.107 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:36.107 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:36.107 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:36.108 bdev_null0 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:36.108 20:31:26 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:36.108 [2024-05-15 20:31:26.911687] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:36.108 bdev_null1 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:36.108 
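The RPC sequence traced above is the whole target-side setup for this pass: for each subsystem a 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1 is created, attached as a namespace of nqn.2016-06.io.spdk:cnode<N>, and exposed on a TCP listener at 10.0.0.2:4420, after which the fio invocation that continues below is launched. A minimal standalone sketch of the same calls via scripts/rpc.py, assuming an already-running nvmf_tgt (the rpc.py path and the transport-creation step are assumptions; dif.sh issues these calls through its rpc_cmd wrapper):

RPC=./scripts/rpc.py                      # assumed path to SPDK's rpc.py
$RPC nvmf_create_transport -t tcp         # assumed: the TCP transport already exists in this run
for i in 0 1; do
    # 64 MiB null bdev, 512 B blocks, 16 B metadata, DIF type 1 (matches the trace above)
    $RPC bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
        --serial-number 53313233-$i --allow-any-host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
        -t tcp -a 10.0.0.2 -s 4420
done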
20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:36.108 { 00:39:36.108 "params": { 00:39:36.108 "name": "Nvme$subsystem", 00:39:36.108 "trtype": "$TEST_TRANSPORT", 00:39:36.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:36.108 "adrfam": "ipv4", 00:39:36.108 "trsvcid": "$NVMF_PORT", 00:39:36.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:36.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:36.108 "hdgst": ${hdgst:-false}, 00:39:36.108 "ddgst": ${ddgst:-false} 00:39:36.108 }, 00:39:36.108 "method": "bdev_nvme_attach_controller" 00:39:36.108 } 00:39:36.108 EOF 00:39:36.108 )") 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:36.108 20:31:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:36.108 { 00:39:36.109 "params": { 00:39:36.109 "name": "Nvme$subsystem", 00:39:36.109 "trtype": "$TEST_TRANSPORT", 00:39:36.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:36.109 "adrfam": "ipv4", 00:39:36.109 "trsvcid": "$NVMF_PORT", 00:39:36.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:36.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:36.109 "hdgst": ${hdgst:-false}, 00:39:36.109 "ddgst": ${ddgst:-false} 00:39:36.109 }, 00:39:36.109 "method": "bdev_nvme_attach_controller" 00:39:36.109 } 00:39:36.109 EOF 00:39:36.109 )") 00:39:36.109 20:31:26 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:39:36.109 20:31:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:36.109 20:31:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:39:36.109 20:31:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:39:36.109 20:31:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:39:36.109 20:31:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:36.109 "params": { 00:39:36.109 "name": "Nvme0", 00:39:36.109 "trtype": "tcp", 00:39:36.109 "traddr": "10.0.0.2", 00:39:36.109 "adrfam": "ipv4", 00:39:36.109 "trsvcid": "4420", 00:39:36.109 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:36.109 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:36.109 "hdgst": false, 00:39:36.109 "ddgst": false 00:39:36.109 }, 00:39:36.109 "method": "bdev_nvme_attach_controller" 00:39:36.109 },{ 00:39:36.109 "params": { 00:39:36.109 "name": "Nvme1", 00:39:36.109 "trtype": "tcp", 00:39:36.109 "traddr": "10.0.0.2", 00:39:36.109 "adrfam": "ipv4", 00:39:36.109 "trsvcid": "4420", 00:39:36.109 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:36.109 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:36.109 "hdgst": false, 00:39:36.109 "ddgst": false 00:39:36.109 }, 00:39:36.109 "method": "bdev_nvme_attach_controller" 00:39:36.109 }' 00:39:36.109 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:39:36.109 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:39:36.109 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:39:36.109 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:36.109 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:39:36.109 20:31:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:39:36.109 20:31:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:39:36.109 20:31:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:39:36.109 20:31:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:36.109 20:31:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:36.109 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:39:36.109 ... 00:39:36.109 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:39:36.109 ... 
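The JSON printed just above is what gen_nvmf_target_json hands to the fio bdev plugin: one bdev_nvme_attach_controller entry per subsystem, so fio sees the remote namespaces as local SPDK bdevs. A minimal sketch of an equivalent standalone invocation, assuming the config and job file are written out as regular files (the file names are assumptions; the test streams both through /dev/fd descriptors instead):

# Run fio through the SPDK bdev plugin against the NVMe/TCP target set up above.
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio

# bdev.json carries attach entries of the exact shape shown in the trace, e.g.:
#   { "method": "bdev_nvme_attach_controller",
#     "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
#                 "adrfam": "ipv4", "trsvcid": "4420",
#                 "subnqn": "nqn.2016-06.io.spdk:cnode0",
#                 "hostnqn": "nqn.2016-06.io.spdk:host0",
#                 "hdgst": false, "ddgst": false } }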
00:39:36.109 fio-3.35 00:39:36.109 Starting 4 threads 00:39:36.109 EAL: No free 2048 kB hugepages reported on node 1 00:39:41.388 00:39:41.388 filename0: (groupid=0, jobs=1): err= 0: pid=351457: Wed May 15 20:31:33 2024 00:39:41.388 read: IOPS=2120, BW=16.6MiB/s (17.4MB/s)(82.9MiB/5003msec) 00:39:41.388 slat (nsec): min=8178, max=50113, avg=9348.55, stdev=3272.80 00:39:41.388 clat (usec): min=1904, max=6272, avg=3747.98, stdev=446.04 00:39:41.388 lat (usec): min=1916, max=6281, avg=3757.33, stdev=446.11 00:39:41.388 clat percentiles (usec): 00:39:41.388 | 1.00th=[ 2868], 5.00th=[ 3228], 10.00th=[ 3425], 20.00th=[ 3523], 00:39:41.388 | 30.00th=[ 3589], 40.00th=[ 3654], 50.00th=[ 3720], 60.00th=[ 3752], 00:39:41.388 | 70.00th=[ 3785], 80.00th=[ 3785], 90.00th=[ 4080], 95.00th=[ 4686], 00:39:41.388 | 99.00th=[ 5538], 99.50th=[ 5669], 99.90th=[ 6063], 99.95th=[ 6063], 00:39:41.388 | 99.99th=[ 6194] 00:39:41.388 bw ( KiB/s): min=16160, max=17504, per=25.14%, avg=16915.56, stdev=420.19, samples=9 00:39:41.388 iops : min= 2020, max= 2188, avg=2114.44, stdev=52.52, samples=9 00:39:41.388 lat (msec) : 2=0.05%, 4=89.07%, 10=10.88% 00:39:41.388 cpu : usr=96.70%, sys=3.04%, ctx=7, majf=0, minf=88 00:39:41.388 IO depths : 1=0.1%, 2=0.4%, 4=69.0%, 8=30.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:41.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.388 complete : 0=0.0%, 4=94.8%, 8=5.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.388 issued rwts: total=10611,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:41.388 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:41.388 filename0: (groupid=0, jobs=1): err= 0: pid=351458: Wed May 15 20:31:33 2024 00:39:41.388 read: IOPS=2116, BW=16.5MiB/s (17.3MB/s)(82.7MiB/5001msec) 00:39:41.388 slat (nsec): min=8180, max=38749, avg=9234.10, stdev=2974.13 00:39:41.388 clat (usec): min=1947, max=6200, avg=3754.75, stdev=519.53 00:39:41.388 lat (usec): min=1955, max=6208, avg=3763.98, stdev=519.42 00:39:41.388 clat percentiles (usec): 00:39:41.388 | 1.00th=[ 2638], 5.00th=[ 3097], 10.00th=[ 3392], 20.00th=[ 3523], 00:39:41.388 | 30.00th=[ 3556], 40.00th=[ 3654], 50.00th=[ 3720], 60.00th=[ 3752], 00:39:41.388 | 70.00th=[ 3752], 80.00th=[ 3785], 90.00th=[ 4228], 95.00th=[ 5407], 00:39:41.388 | 99.00th=[ 5473], 99.50th=[ 5669], 99.90th=[ 5735], 99.95th=[ 5800], 00:39:41.388 | 99.99th=[ 6194] 00:39:41.388 bw ( KiB/s): min=16016, max=17440, per=25.31%, avg=17025.78, stdev=490.38, samples=9 00:39:41.388 iops : min= 2002, max= 2180, avg=2128.22, stdev=61.30, samples=9 00:39:41.388 lat (msec) : 2=0.02%, 4=88.32%, 10=11.66% 00:39:41.388 cpu : usr=95.52%, sys=3.46%, ctx=254, majf=0, minf=75 00:39:41.388 IO depths : 1=0.1%, 2=0.7%, 4=72.5%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:41.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.389 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.389 issued rwts: total=10583,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:41.389 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:41.389 filename1: (groupid=0, jobs=1): err= 0: pid=351459: Wed May 15 20:31:33 2024 00:39:41.389 read: IOPS=2063, BW=16.1MiB/s (16.9MB/s)(80.6MiB/5002msec) 00:39:41.389 slat (nsec): min=8086, max=38286, avg=9216.68, stdev=3107.75 00:39:41.389 clat (usec): min=1619, max=7376, avg=3851.94, stdev=580.49 00:39:41.389 lat (usec): min=1635, max=7399, avg=3861.16, stdev=580.41 00:39:41.389 clat percentiles (usec): 00:39:41.389 | 1.00th=[ 2868], 5.00th=[ 3326], 
10.00th=[ 3458], 20.00th=[ 3556], 00:39:41.389 | 30.00th=[ 3621], 40.00th=[ 3687], 50.00th=[ 3720], 60.00th=[ 3752], 00:39:41.389 | 70.00th=[ 3785], 80.00th=[ 3818], 90.00th=[ 4752], 95.00th=[ 5407], 00:39:41.389 | 99.00th=[ 5669], 99.50th=[ 5735], 99.90th=[ 6063], 99.95th=[ 6325], 00:39:41.389 | 99.99th=[ 7308] 00:39:41.389 bw ( KiB/s): min=16112, max=17152, per=24.55%, avg=16517.33, stdev=394.60, samples=9 00:39:41.389 iops : min= 2014, max= 2144, avg=2064.67, stdev=49.33, samples=9 00:39:41.389 lat (msec) : 2=0.05%, 4=83.77%, 10=16.18% 00:39:41.389 cpu : usr=97.04%, sys=2.68%, ctx=6, majf=0, minf=66 00:39:41.389 IO depths : 1=0.1%, 2=0.3%, 4=72.9%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:41.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.389 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.389 issued rwts: total=10321,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:41.389 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:41.389 filename1: (groupid=0, jobs=1): err= 0: pid=351460: Wed May 15 20:31:33 2024 00:39:41.389 read: IOPS=2109, BW=16.5MiB/s (17.3MB/s)(82.5MiB/5003msec) 00:39:41.389 slat (nsec): min=8177, max=37631, avg=9119.98, stdev=2812.98 00:39:41.389 clat (usec): min=1977, max=45380, avg=3768.90, stdev=1225.82 00:39:41.389 lat (usec): min=1999, max=45415, avg=3778.02, stdev=1226.02 00:39:41.389 clat percentiles (usec): 00:39:41.389 | 1.00th=[ 2835], 5.00th=[ 3294], 10.00th=[ 3425], 20.00th=[ 3523], 00:39:41.389 | 30.00th=[ 3556], 40.00th=[ 3654], 50.00th=[ 3720], 60.00th=[ 3752], 00:39:41.389 | 70.00th=[ 3785], 80.00th=[ 3785], 90.00th=[ 3949], 95.00th=[ 4686], 00:39:41.389 | 99.00th=[ 5604], 99.50th=[ 5669], 99.90th=[ 6390], 99.95th=[45351], 00:39:41.389 | 99.99th=[45351] 00:39:41.389 bw ( KiB/s): min=15535, max=17456, per=24.99%, avg=16814.11, stdev=639.07, samples=9 00:39:41.389 iops : min= 1941, max= 2182, avg=2101.67, stdev=80.10, samples=9 00:39:41.389 lat (msec) : 2=0.02%, 4=90.91%, 10=8.99%, 50=0.08% 00:39:41.389 cpu : usr=97.14%, sys=2.58%, ctx=8, majf=0, minf=122 00:39:41.389 IO depths : 1=0.2%, 2=0.4%, 4=68.2%, 8=31.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:41.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.389 complete : 0=0.0%, 4=95.5%, 8=4.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.389 issued rwts: total=10555,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:41.389 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:41.389 00:39:41.389 Run status group 0 (all jobs): 00:39:41.389 READ: bw=65.7MiB/s (68.9MB/s), 16.1MiB/s-16.6MiB/s (16.9MB/s-17.4MB/s), io=329MiB (345MB), run=5001-5003msec 00:39:41.389 20:31:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:39:41.389 20:31:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:41.389 20:31:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:41.389 20:31:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:41.389 20:31:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:41.389 20:31:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:41.389 20:31:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:41.389 20:31:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:41.389 20:31:33 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:41.389 20:31:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:41.389 20:31:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:41.389 20:31:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:41.389 20:31:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:41.389 20:31:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:41.389 20:31:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:41.389 20:31:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:39:41.389 20:31:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:41.389 20:31:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:41.389 20:31:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:41.389 20:31:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:41.389 20:31:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:41.389 20:31:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:41.389 20:31:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:41.389 20:31:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:41.389 00:39:41.389 real 0m24.312s 00:39:41.389 user 5m21.492s 00:39:41.389 sys 0m4.440s 00:39:41.389 20:31:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:41.389 20:31:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:41.389 ************************************ 00:39:41.389 END TEST fio_dif_rand_params 00:39:41.389 ************************************ 00:39:41.389 20:31:33 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:39:41.389 20:31:33 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:41.389 20:31:33 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:41.389 20:31:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:41.389 ************************************ 00:39:41.389 START TEST fio_dif_digest 00:39:41.389 ************************************ 00:39:41.389 20:31:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:39:41.389 20:31:33 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:39:41.389 20:31:33 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:39:41.389 20:31:33 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:39:41.389 20:31:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:39:41.389 20:31:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:39:41.389 20:31:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:39:41.389 20:31:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:39:41.389 20:31:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:39:41.389 20:31:33 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:39:41.389 20:31:33 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:39:41.389 20:31:33 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:39:41.389 20:31:33 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:39:41.389 20:31:33 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:39:41.389 20:31:33 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:39:41.389 20:31:33 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:39:41.389 20:31:33 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:39:41.389 20:31:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:41.389 20:31:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:41.389 bdev_null0 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:41.390 [2024-05-15 20:31:33.506702] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:39:41.390 20:31:33 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:41.390 { 00:39:41.390 "params": { 00:39:41.390 "name": "Nvme$subsystem", 00:39:41.390 "trtype": "$TEST_TRANSPORT", 00:39:41.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:41.390 "adrfam": "ipv4", 00:39:41.390 "trsvcid": "$NVMF_PORT", 00:39:41.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:41.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:41.390 "hdgst": ${hdgst:-false}, 00:39:41.390 "ddgst": ${ddgst:-false} 00:39:41.390 }, 00:39:41.390 "method": "bdev_nvme_attach_controller" 00:39:41.390 } 00:39:41.390 EOF 00:39:41.390 )") 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:41.390 "params": { 00:39:41.390 "name": "Nvme0", 00:39:41.390 "trtype": "tcp", 00:39:41.390 "traddr": "10.0.0.2", 00:39:41.390 "adrfam": "ipv4", 00:39:41.390 "trsvcid": "4420", 00:39:41.390 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:41.390 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:41.390 "hdgst": true, 00:39:41.390 "ddgst": true 00:39:41.390 }, 00:39:41.390 "method": "bdev_nvme_attach_controller" 00:39:41.390 }' 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:41.390 20:31:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:41.649 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:39:41.649 ... 
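Relative to the earlier runs, the resolved JSON above differs in two ways: the null bdev behind cnode0 was created with --dif-type 3, and hdgst/ddgst are now true, so the initiator negotiates NVMe/TCP header and data digests with the target. A minimal sketch of the job this run amounts to, assuming the thread/direct settings and the "Nvme0n1" bdev name (all assumptions; the test generates its job file on the fly):

# Digest-enabled run: 3 jobs, 128 KiB random reads, queue depth 3, 10 s, as logged below.
cat > dif-digest.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
direct=1
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=10

[filename0]
filename=Nvme0n1
EOF

# bdev.json is the attach config shown above, with "hdgst": true, "ddgst": true.
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf bdev.json dif-digest.fio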
00:39:41.649 fio-3.35 00:39:41.649 Starting 3 threads 00:39:41.649 EAL: No free 2048 kB hugepages reported on node 1 00:39:53.879 00:39:53.879 filename0: (groupid=0, jobs=1): err= 0: pid=352748: Wed May 15 20:31:44 2024 00:39:53.879 read: IOPS=157, BW=19.7MiB/s (20.6MB/s)(198MiB/10040msec) 00:39:53.879 slat (usec): min=8, max=113, avg=10.24, stdev= 3.11 00:39:53.879 clat (usec): min=12128, max=99111, avg=19046.38, stdev=11287.81 00:39:53.879 lat (usec): min=12137, max=99120, avg=19056.63, stdev=11287.78 00:39:53.879 clat percentiles (usec): 00:39:53.879 | 1.00th=[13304], 5.00th=[13960], 10.00th=[14484], 20.00th=[15139], 00:39:53.879 | 30.00th=[15533], 40.00th=[15926], 50.00th=[16188], 60.00th=[16581], 00:39:53.879 | 70.00th=[16909], 80.00th=[17433], 90.00th=[18482], 95.00th=[56361], 00:39:53.879 | 99.00th=[58983], 99.50th=[59507], 99.90th=[98042], 99.95th=[99091], 00:39:53.879 | 99.99th=[99091] 00:39:53.879 bw ( KiB/s): min=15872, max=24320, per=24.02%, avg=20185.60, stdev=2662.08, samples=20 00:39:53.879 iops : min= 124, max= 190, avg=157.70, stdev=20.80, samples=20 00:39:53.879 lat (msec) : 20=92.97%, 50=0.25%, 100=6.77% 00:39:53.879 cpu : usr=95.03%, sys=4.16%, ctx=36, majf=0, minf=97 00:39:53.879 IO depths : 1=1.3%, 2=98.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:53.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.879 issued rwts: total=1580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.879 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:53.879 filename0: (groupid=0, jobs=1): err= 0: pid=352749: Wed May 15 20:31:44 2024 00:39:53.879 read: IOPS=251, BW=31.4MiB/s (32.9MB/s)(315MiB/10046msec) 00:39:53.879 slat (nsec): min=8468, max=33073, avg=9318.67, stdev=1140.38 00:39:53.879 clat (usec): min=7742, max=49662, avg=11921.67, stdev=1839.52 00:39:53.879 lat (usec): min=7751, max=49671, avg=11930.99, stdev=1839.57 00:39:53.879 clat percentiles (usec): 00:39:53.879 | 1.00th=[ 8291], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[10683], 00:39:53.879 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12256], 60.00th=[12518], 00:39:53.879 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13566], 95.00th=[13960], 00:39:53.879 | 99.00th=[14484], 99.50th=[15008], 99.90th=[20055], 99.95th=[45876], 00:39:53.879 | 99.99th=[49546] 00:39:53.879 bw ( KiB/s): min=30208, max=34304, per=38.37%, avg=32243.20, stdev=1266.39, samples=20 00:39:53.879 iops : min= 236, max= 268, avg=252.00, stdev= 9.90, samples=20 00:39:53.879 lat (msec) : 10=16.26%, 20=83.62%, 50=0.12% 00:39:53.879 cpu : usr=95.88%, sys=3.84%, ctx=15, majf=0, minf=136 00:39:53.879 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:53.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.879 issued rwts: total=2522,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.879 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:53.879 filename0: (groupid=0, jobs=1): err= 0: pid=352751: Wed May 15 20:31:44 2024 00:39:53.879 read: IOPS=248, BW=31.0MiB/s (32.5MB/s)(312MiB/10045msec) 00:39:53.879 slat (usec): min=8, max=102, avg=11.32, stdev= 2.57 00:39:53.879 clat (usec): min=7575, max=49031, avg=12039.27, stdev=1646.81 00:39:53.879 lat (usec): min=7587, max=49043, avg=12050.59, stdev=1646.91 00:39:53.879 clat percentiles (usec): 00:39:53.879 | 1.00th=[ 8455], 5.00th=[ 9110], 
10.00th=[ 9503], 20.00th=[10814], 00:39:53.879 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12256], 60.00th=[12649], 00:39:53.879 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13566], 95.00th=[13960], 00:39:53.879 | 99.00th=[14746], 99.50th=[15008], 99.90th=[17171], 99.95th=[19006], 00:39:53.879 | 99.99th=[49021] 00:39:53.879 bw ( KiB/s): min=29184, max=34560, per=37.96%, avg=31897.60, stdev=1373.34, samples=20 00:39:53.879 iops : min= 228, max= 270, avg=249.20, stdev=10.73, samples=20 00:39:53.879 lat (msec) : 10=14.16%, 20=85.80%, 50=0.04% 00:39:53.879 cpu : usr=94.74%, sys=4.77%, ctx=15, majf=0, minf=166 00:39:53.879 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:53.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.879 issued rwts: total=2493,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.879 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:53.879 00:39:53.879 Run status group 0 (all jobs): 00:39:53.879 READ: bw=82.1MiB/s (86.0MB/s), 19.7MiB/s-31.4MiB/s (20.6MB/s-32.9MB/s), io=824MiB (864MB), run=10040-10046msec 00:39:53.879 20:31:44 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:39:53.879 20:31:44 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:39:53.879 20:31:44 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:39:53.879 20:31:44 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:53.879 20:31:44 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:39:53.879 20:31:44 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:53.879 20:31:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:53.879 20:31:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:53.879 20:31:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:53.879 20:31:44 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:53.879 20:31:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:53.879 20:31:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:53.879 20:31:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:53.879 00:39:53.879 real 0m11.175s 00:39:53.879 user 0m40.060s 00:39:53.879 sys 0m1.572s 00:39:53.879 20:31:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:53.879 20:31:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:53.879 ************************************ 00:39:53.879 END TEST fio_dif_digest 00:39:53.879 ************************************ 00:39:53.879 20:31:44 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:39:53.879 20:31:44 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:39:53.879 20:31:44 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:53.879 20:31:44 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:39:53.879 20:31:44 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:53.879 20:31:44 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:39:53.879 20:31:44 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:53.879 20:31:44 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:53.879 rmmod nvme_tcp 00:39:53.879 rmmod nvme_fabrics 00:39:53.879 rmmod nvme_keyring 
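The rmmod lines above are the start of nvmftestfini: the kernel NVMe/TCP modules loaded for the suite are removed, the nvmf_tgt process that served the targets is killed, and setup.sh reset returns the test devices to their default drivers, as the following lines show. A minimal sketch of that teardown, assuming the target pid is tracked in a shell variable (the variable name is an assumption; the suite uses its killprocess helper):

sudo modprobe -v -r nvme-tcp          # drops nvme_tcp, nvme_fabrics, nvme_keyring as logged above
sudo modprobe -v -r nvme-fabrics
kill "$NVMF_TGT_PID" 2>/dev/null      # assumed variable holding the nvmf_tgt pid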
00:39:53.879 20:31:44 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:53.879 20:31:44 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:39:53.879 20:31:44 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:39:53.879 20:31:44 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 342547 ']' 00:39:53.879 20:31:44 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 342547 00:39:53.879 20:31:44 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 342547 ']' 00:39:53.879 20:31:44 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 342547 00:39:53.879 20:31:44 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:39:53.879 20:31:44 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:39:53.879 20:31:44 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 342547 00:39:53.879 20:31:44 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:39:53.879 20:31:44 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:39:53.879 20:31:44 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 342547' 00:39:53.879 killing process with pid 342547 00:39:53.879 20:31:44 nvmf_dif -- common/autotest_common.sh@965 -- # kill 342547 00:39:53.880 [2024-05-15 20:31:44.803951] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:39:53.880 20:31:44 nvmf_dif -- common/autotest_common.sh@970 -- # wait 342547 00:39:53.880 20:31:44 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:39:53.880 20:31:44 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:56.424 Waiting for block devices as requested 00:39:56.424 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:56.424 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:56.685 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:56.685 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:56.685 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:56.685 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:56.945 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:56.945 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:56.945 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:39:57.205 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:57.205 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:57.501 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:57.501 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:57.501 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:57.501 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:57.809 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:57.809 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:58.071 20:31:50 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:58.071 20:31:50 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:58.071 20:31:50 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:58.071 20:31:50 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:58.071 20:31:50 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:58.071 20:31:50 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:58.071 20:31:50 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:59.983 20:31:52 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:59.983 00:39:59.983 real 1m18.791s 00:39:59.983 
user 7m59.393s 00:39:59.983 sys 0m21.302s 00:39:59.983 20:31:52 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:59.983 20:31:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:59.983 ************************************ 00:39:59.983 END TEST nvmf_dif 00:39:59.983 ************************************ 00:40:00.244 20:31:52 -- spdk/autotest.sh@289 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:40:00.244 20:31:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:40:00.244 20:31:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:00.244 20:31:52 -- common/autotest_common.sh@10 -- # set +x 00:40:00.244 ************************************ 00:40:00.244 START TEST nvmf_abort_qd_sizes 00:40:00.244 ************************************ 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:40:00.244 * Looking for test storage... 00:40:00.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:00.244 20:31:52 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:40:00.244 20:31:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:40:08.413 Found 0000:31:00.0 (0x8086 - 0x159b) 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:08.413 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:40:08.414 Found 0000:31:00.1 (0x8086 - 0x159b) 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:40:08.414 Found net devices under 0000:31:00.0: cvl_0_0 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:40:08.414 Found net devices under 0000:31:00.1: cvl_0_1 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
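Note on the nvmf_tcp_init sequence that follows: having found the two E810 net devices (cvl_0_0 and cvl_0_1), the test builds a back-to-back TCP test bed by moving the target-side port into a private network namespace and leaving the initiator-side port in the root namespace. Condensed into a sketch for readability; interface names, addresses, and commands are taken from the trace below, and only the sudo prefix is an assumption.

    # sketch only -- condensed restatement of the nvmf_tcp_init steps traced below
    sudo ip netns add cvl_0_0_ns_spdk                                        # namespace for the target-side port
    sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    sudo ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP in the root namespace
    sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the namespace
    sudo ip link set cvl_0_1 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
    sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic on the default port
    ping -c 1 10.0.0.2                                                       # sanity-check the path before starting nvmf_tgt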
00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:40:08.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:08.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.835 ms 00:40:08.414 00:40:08.414 --- 10.0.0.2 ping statistics --- 00:40:08.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:08.414 rtt min/avg/max/mdev = 0.835/0.835/0.835/0.000 ms 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:08.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:08.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.380 ms 00:40:08.414 00:40:08.414 --- 10.0.0.1 ping statistics --- 00:40:08.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:08.414 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:40:08.414 20:32:00 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:12.618 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:40:12.618 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:40:12.618 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:40:12.618 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:40:12.618 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:40:12.618 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:40:12.618 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:40:12.618 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:40:12.618 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:40:12.618 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:40:12.618 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:40:12.618 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:40:12.618 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:40:12.618 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:40:12.618 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:40:12.618 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:40:12.618 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:40:12.878 20:32:05 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:12.878 20:32:05 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:12.878 20:32:05 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:12.878 20:32:05 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:12.878 20:32:05 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:12.878 20:32:05 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:12.878 20:32:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:40:12.878 20:32:05 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:12.878 20:32:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:40:12.878 20:32:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:12.878 20:32:05 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=363012 00:40:12.878 20:32:05 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 363012 00:40:12.878 20:32:05 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:40:12.878 20:32:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 363012 ']' 00:40:12.878 20:32:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:12.878 20:32:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:40:12.878 20:32:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:40:12.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:12.878 20:32:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:40:12.878 20:32:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:12.878 [2024-05-15 20:32:05.367248] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:40:12.878 [2024-05-15 20:32:05.367308] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:13.139 EAL: No free 2048 kB hugepages reported on node 1 00:40:13.139 [2024-05-15 20:32:05.461241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:13.139 [2024-05-15 20:32:05.560952] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:13.139 [2024-05-15 20:32:05.561010] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:13.139 [2024-05-15 20:32:05.561019] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:13.139 [2024-05-15 20:32:05.561026] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:13.139 [2024-05-15 20:32:05.561033] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:13.139 [2024-05-15 20:32:05.561162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:40:13.139 [2024-05-15 20:32:05.561309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:40:13.139 [2024-05-15 20:32:05.561454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:40:13.139 [2024-05-15 20:32:05.561618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:14.081 20:32:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:40:14.081 20:32:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:40:14.081 20:32:06 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:14.081 20:32:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:14.081 20:32:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:14.081 20:32:06 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:14.081 20:32:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:40:14.081 20:32:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:40:14.081 20:32:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:40:14.081 20:32:06 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:40:14.081 20:32:06 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:40:14.081 20:32:06 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:40:14.081 20:32:06 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:40:14.081 20:32:06 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:40:14.081 20:32:06 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:40:14.081 20:32:06 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:40:14.081 20:32:06 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:40:14.081 20:32:06 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:40:14.081 20:32:06 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:40:14.081 20:32:06 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:40:14.081 20:32:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:40:14.081 20:32:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:40:14.081 20:32:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:40:14.081 20:32:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:40:14.081 20:32:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:14.081 20:32:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:14.081 ************************************ 00:40:14.081 START TEST spdk_target_abort 00:40:14.081 ************************************ 00:40:14.081 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:40:14.081 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:40:14.081 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:40:14.081 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:14.081 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:14.342 spdk_targetn1 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:14.342 [2024-05-15 20:32:06.670161] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:14.342 [2024-05-15 20:32:06.710205] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:40:14.342 [2024-05-15 20:32:06.710440] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:14.342 20:32:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:14.342 EAL: No free 2048 kB hugepages reported on node 1 00:40:14.603 [2024-05-15 20:32:06.874604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:296 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:40:14.603 [2024-05-15 20:32:06.874627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0027 p:1 m:0 dnr:0 00:40:14.604 [2024-05-15 20:32:06.875369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:344 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:40:14.604 [2024-05-15 20:32:06.875381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:002c p:1 m:0 dnr:0 00:40:14.604 [2024-05-15 20:32:06.881832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:480 len:8 PRP1 0x2000078be000 PRP2 0x0 00:40:14.604 [2024-05-15 20:32:06.881845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:003f p:1 m:0 dnr:0 00:40:14.604 [2024-05-15 20:32:06.903088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1320 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:40:14.604 [2024-05-15 20:32:06.903104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00a6 p:1 m:0 dnr:0 00:40:14.604 [2024-05-15 20:32:06.932084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2128 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:40:14.604 [2024-05-15 20:32:06.932100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:40:14.604 [2024-05-15 20:32:06.947789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2616 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:40:14.604 [2024-05-15 20:32:06.947804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:40:14.604 [2024-05-15 20:32:06.971775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3416 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:40:14.604 [2024-05-15 20:32:06.971791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00ac p:0 m:0 dnr:0 00:40:14.604 [2024-05-15 20:32:06.979798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3664 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:40:14.604 [2024-05-15 20:32:06.979812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00cc p:0 m:0 dnr:0 00:40:14.604 [2024-05-15 20:32:06.993790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:4040 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:40:14.604 [2024-05-15 20:32:06.993805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00fd p:0 m:0 dnr:0 00:40:14.604 [2024-05-15 20:32:07.002480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:4368 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:40:14.604 [2024-05-15 20:32:07.002494] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0025 p:1 m:0 dnr:0 00:40:14.604 [2024-05-15 20:32:07.017813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:4824 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:40:14.604 [2024-05-15 20:32:07.017828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:005e p:1 m:0 dnr:0 00:40:17.905 Initializing NVMe Controllers 00:40:17.905 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:40:17.905 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:17.905 Initialization complete. Launching workers. 00:40:17.905 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11192, failed: 11 00:40:17.905 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3378, failed to submit 7825 00:40:17.905 success 710, unsuccess 2668, failed 0 00:40:17.905 20:32:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:17.905 20:32:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:17.905 EAL: No free 2048 kB hugepages reported on node 1 00:40:17.905 [2024-05-15 20:32:10.073499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:672 len:8 PRP1 0x200007c4e000 PRP2 0x0 00:40:17.905 [2024-05-15 20:32:10.073542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:0057 p:1 m:0 dnr:0 00:40:17.905 [2024-05-15 20:32:10.081389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:177 nsid:1 lba:776 len:8 PRP1 0x200007c46000 PRP2 0x0 00:40:17.905 [2024-05-15 20:32:10.081411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:177 cdw0:0 sqhd:006c p:1 m:0 dnr:0 00:40:17.905 [2024-05-15 20:32:10.097459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:190 nsid:1 lba:1152 len:8 PRP1 0x200007c44000 PRP2 0x0 00:40:17.905 [2024-05-15 20:32:10.097481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:190 cdw0:0 sqhd:009c p:1 m:0 dnr:0 00:40:17.905 [2024-05-15 20:32:10.129451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:168 nsid:1 lba:2008 len:8 PRP1 0x200007c46000 PRP2 0x0 00:40:17.905 [2024-05-15 20:32:10.129474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:168 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:40:17.905 [2024-05-15 20:32:10.153437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:171 nsid:1 lba:2552 len:8 PRP1 0x200007c5a000 PRP2 0x0 00:40:17.905 [2024-05-15 20:32:10.153461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:171 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:40:17.905 [2024-05-15 20:32:10.225488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:180 nsid:1 lba:4184 len:8 PRP1 0x200007c50000 PRP2 0x0 00:40:17.905 [2024-05-15 20:32:10.225511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:180 cdw0:0 sqhd:0013 p:1 m:0 
dnr:0 00:40:17.905 [2024-05-15 20:32:10.233450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:184 nsid:1 lba:4424 len:8 PRP1 0x200007c46000 PRP2 0x0 00:40:17.905 [2024-05-15 20:32:10.233471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:184 cdw0:0 sqhd:002f p:1 m:0 dnr:0 00:40:17.905 [2024-05-15 20:32:10.241406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:170 nsid:1 lba:4560 len:8 PRP1 0x200007c44000 PRP2 0x0 00:40:17.905 [2024-05-15 20:32:10.241426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:170 cdw0:0 sqhd:0044 p:1 m:0 dnr:0 00:40:21.207 Initializing NVMe Controllers 00:40:21.207 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:40:21.207 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:21.207 Initialization complete. Launching workers. 00:40:21.207 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8733, failed: 8 00:40:21.207 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1246, failed to submit 7495 00:40:21.207 success 331, unsuccess 915, failed 0 00:40:21.207 20:32:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:21.207 20:32:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:21.207 EAL: No free 2048 kB hugepages reported on node 1 00:40:21.207 [2024-05-15 20:32:13.399061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:170 nsid:1 lba:4296 len:8 PRP1 0x2000078fc000 PRP2 0x0 00:40:21.207 [2024-05-15 20:32:13.399095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:170 cdw0:0 sqhd:00ef p:0 m:0 dnr:0 00:40:24.504 Initializing NVMe Controllers 00:40:24.504 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:40:24.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:24.504 Initialization complete. Launching workers. 
00:40:24.504 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 40731, failed: 1 00:40:24.504 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2526, failed to submit 38206 00:40:24.504 success 591, unsuccess 1935, failed 0 00:40:24.504 20:32:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:40:24.504 20:32:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:24.504 20:32:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:24.504 20:32:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:24.504 20:32:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:40:24.504 20:32:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:24.504 20:32:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:25.891 20:32:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:25.891 20:32:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 363012 00:40:25.891 20:32:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 363012 ']' 00:40:25.891 20:32:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 363012 00:40:25.891 20:32:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:40:25.891 20:32:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:40:25.891 20:32:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 363012 00:40:25.891 20:32:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:40:25.891 20:32:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:40:25.891 20:32:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 363012' 00:40:25.891 killing process with pid 363012 00:40:25.891 20:32:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 363012 00:40:25.891 [2024-05-15 20:32:18.301495] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:40:25.891 20:32:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 363012 00:40:26.152 00:40:26.152 real 0m12.076s 00:40:26.152 user 0m49.298s 00:40:26.152 sys 0m1.986s 00:40:26.152 20:32:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:26.152 20:32:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:26.152 ************************************ 00:40:26.152 END TEST spdk_target_abort 00:40:26.152 ************************************ 00:40:26.152 20:32:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:40:26.152 20:32:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:40:26.152 20:32:18 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:40:26.152 20:32:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:26.152 ************************************ 00:40:26.152 START TEST kernel_target_abort 00:40:26.152 ************************************ 00:40:26.152 20:32:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:40:26.152 20:32:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:40:26.152 20:32:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:40:26.152 20:32:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:40:26.152 20:32:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:40:26.152 20:32:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:26.152 20:32:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:26.152 20:32:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:40:26.152 20:32:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:26.152 20:32:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:40:26.152 20:32:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:40:26.152 20:32:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:40:26.152 20:32:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:40:26.152 20:32:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:40:26.152 20:32:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:40:26.152 20:32:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:26.152 20:32:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:26.152 20:32:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:40:26.152 20:32:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:40:26.152 20:32:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:40:26.152 20:32:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:40:26.152 20:32:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:40:26.152 20:32:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:40:30.357 Waiting for block devices as requested 00:40:30.357 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:30.357 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:30.357 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:30.357 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:30.357 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:40:30.357 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:40:30.357 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:40:30.617 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:30.617 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:40:30.877 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:30.877 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:30.877 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:31.138 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:31.138 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:40:31.138 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:40:31.399 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:40:31.399 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:31.660 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:40:31.660 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:40:31.660 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:40:31.660 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:40:31.660 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:40:31.660 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:40:31.660 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:40:31.660 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:40:31.660 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:40:31.660 No valid GPT data, bailing 00:40:31.660 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:40:31.660 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:40:31.660 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:40:31.660 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:40:31.660 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:40:31.660 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:31.660 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:31.660 20:32:24 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:40:31.660 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:40:31.660 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:40:31.660 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:40:31.660 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:40:31.660 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:40:31.660 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:40:31.660 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:40:31.660 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:40:31.660 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:40:31.921 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:40:31.921 00:40:31.921 Discovery Log Number of Records 2, Generation counter 2 00:40:31.921 =====Discovery Log Entry 0====== 00:40:31.921 trtype: tcp 00:40:31.921 adrfam: ipv4 00:40:31.921 subtype: current discovery subsystem 00:40:31.921 treq: not specified, sq flow control disable supported 00:40:31.921 portid: 1 00:40:31.921 trsvcid: 4420 00:40:31.921 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:40:31.921 traddr: 10.0.0.1 00:40:31.921 eflags: none 00:40:31.921 sectype: none 00:40:31.921 =====Discovery Log Entry 1====== 00:40:31.921 trtype: tcp 00:40:31.921 adrfam: ipv4 00:40:31.921 subtype: nvme subsystem 00:40:31.921 treq: not specified, sq flow control disable supported 00:40:31.921 portid: 1 00:40:31.921 trsvcid: 4420 00:40:31.921 subnqn: nqn.2016-06.io.spdk:testnqn 00:40:31.921 traddr: 10.0.0.1 00:40:31.921 eflags: none 00:40:31.921 sectype: none 00:40:31.921 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:40:31.921 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:40:31.921 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:40:31.921 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:40:31.921 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:40:31.921 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:40:31.921 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:40:31.921 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:40:31.921 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:40:31.921 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:31.921 20:32:24 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:40:31.921 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:31.921 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:40:31.921 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:31.921 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:40:31.921 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:31.921 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:40:31.921 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:31.921 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:31.921 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:31.921 20:32:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:31.921 EAL: No free 2048 kB hugepages reported on node 1 00:40:35.222 Initializing NVMe Controllers 00:40:35.222 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:35.222 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:35.222 Initialization complete. Launching workers. 00:40:35.222 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 51927, failed: 0 00:40:35.222 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 51927, failed to submit 0 00:40:35.222 success 0, unsuccess 51927, failed 0 00:40:35.222 20:32:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:35.222 20:32:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:35.222 EAL: No free 2048 kB hugepages reported on node 1 00:40:38.522 Initializing NVMe Controllers 00:40:38.522 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:38.522 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:38.522 Initialization complete. Launching workers. 
00:40:38.522 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 93274, failed: 0 00:40:38.522 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23518, failed to submit 69756 00:40:38.522 success 0, unsuccess 23518, failed 0 00:40:38.522 20:32:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:38.522 20:32:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:38.522 EAL: No free 2048 kB hugepages reported on node 1 00:40:41.064 Initializing NVMe Controllers 00:40:41.064 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:41.064 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:41.064 Initialization complete. Launching workers. 00:40:41.064 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 90400, failed: 0 00:40:41.064 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22586, failed to submit 67814 00:40:41.064 success 0, unsuccess 22586, failed 0 00:40:41.325 20:32:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:40:41.325 20:32:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:40:41.325 20:32:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:40:41.325 20:32:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:41.325 20:32:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:41.325 20:32:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:40:41.325 20:32:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:41.325 20:32:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:40:41.325 20:32:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:40:41.325 20:32:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:45.527 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:40:45.528 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:40:45.528 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:40:45.528 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:40:45.528 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:40:45.528 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:40:45.528 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:40:45.528 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:40:45.528 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:40:45.528 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:40:45.528 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:40:45.528 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:40:45.528 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:40:45.528 0000:00:01.3 (8086 0b00): ioatdma -> 
vfio-pci 00:40:45.528 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:40:45.528 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:40:47.014 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:40:47.274 00:40:47.274 real 0m21.058s 00:40:47.274 user 0m8.771s 00:40:47.274 sys 0m6.983s 00:40:47.274 20:32:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:47.274 20:32:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:47.274 ************************************ 00:40:47.274 END TEST kernel_target_abort 00:40:47.274 ************************************ 00:40:47.274 20:32:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:40:47.274 20:32:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:40:47.274 20:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:47.274 20:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:40:47.274 20:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:47.274 20:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:40:47.275 20:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:47.275 20:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:40:47.275 rmmod nvme_tcp 00:40:47.275 rmmod nvme_fabrics 00:40:47.275 rmmod nvme_keyring 00:40:47.275 20:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:47.275 20:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:40:47.275 20:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:40:47.275 20:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 363012 ']' 00:40:47.275 20:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 363012 00:40:47.275 20:32:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 363012 ']' 00:40:47.275 20:32:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 363012 00:40:47.275 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (363012) - No such process 00:40:47.275 20:32:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 363012 is not found' 00:40:47.275 Process with pid 363012 is not found 00:40:47.275 20:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:40:47.275 20:32:39 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:40:51.487 Waiting for block devices as requested 00:40:51.487 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:51.487 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:51.487 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:51.488 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:51.488 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:40:51.749 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:40:51.749 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:40:51.749 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:52.010 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:40:52.010 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:52.272 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:52.272 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:52.272 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:52.532 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:40:52.532 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:40:52.532 0000:00:01.0 (8086 
0b00): vfio-pci -> ioatdma 00:40:52.793 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:53.054 20:32:45 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:53.054 20:32:45 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:53.054 20:32:45 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:53.054 20:32:45 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:53.054 20:32:45 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:53.054 20:32:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:53.054 20:32:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:54.966 20:32:47 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:40:54.966 00:40:54.966 real 0m54.861s 00:40:54.966 user 1m4.029s 00:40:54.966 sys 0m21.280s 00:40:54.966 20:32:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:54.966 20:32:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:54.966 ************************************ 00:40:54.966 END TEST nvmf_abort_qd_sizes 00:40:54.966 ************************************ 00:40:54.966 20:32:47 -- spdk/autotest.sh@291 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:40:54.967 20:32:47 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:40:54.967 20:32:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:54.967 20:32:47 -- common/autotest_common.sh@10 -- # set +x 00:40:55.227 ************************************ 00:40:55.227 START TEST keyring_file 00:40:55.227 ************************************ 00:40:55.227 20:32:47 keyring_file -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:40:55.227 * Looking for test storage... 
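The kernel-target teardown traced above (clean_kernel_target followed by nvmftestfini) reverses the configfs setup and unloads the target and initiator modules; a condensed sketch, with the redirection target of the traced 'echo 0' assumed to be the namespace enable attribute:

    nqn=nqn.2016-06.io.spdk:testnqn
    echo 0 > /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1/enable   # assumed attribute
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/$nqn
    rmdir /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/$nqn
    modprobe -r nvmet_tcp nvmet      # drop the kernel target
    modprobe -v -r nvme-tcp          # nvmftestfini: drop the initiator transports
    modprobe -v -r nvme-fabrics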
00:40:55.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:40:55.227 20:32:47 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:40:55.227 20:32:47 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:55.228 20:32:47 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:55.228 20:32:47 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:55.228 20:32:47 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:55.228 20:32:47 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.228 20:32:47 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.228 20:32:47 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.228 20:32:47 keyring_file -- paths/export.sh@5 -- # export PATH 00:40:55.228 20:32:47 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@47 -- # : 0 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:55.228 20:32:47 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:40:55.228 20:32:47 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:40:55.228 20:32:47 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:40:55.228 20:32:47 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:40:55.228 20:32:47 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:40:55.228 20:32:47 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:40:55.228 20:32:47 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:40:55.228 20:32:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:55.228 20:32:47 keyring_file -- keyring/common.sh@17 -- # name=key0 00:40:55.228 20:32:47 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:55.228 20:32:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:55.228 20:32:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:55.228 20:32:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.xZH0Dui5tI 00:40:55.228 20:32:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@705 -- # python - 00:40:55.228 20:32:47 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.xZH0Dui5tI 00:40:55.228 20:32:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.xZH0Dui5tI 00:40:55.228 20:32:47 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.xZH0Dui5tI 00:40:55.228 20:32:47 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:40:55.228 20:32:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:55.228 20:32:47 keyring_file -- keyring/common.sh@17 -- # name=key1 00:40:55.228 20:32:47 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:40:55.228 20:32:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:55.228 20:32:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:55.228 20:32:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.F0sAuudZA8 00:40:55.228 20:32:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:40:55.228 20:32:47 keyring_file -- nvmf/common.sh@705 -- # python - 00:40:55.490 20:32:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.F0sAuudZA8 00:40:55.490 20:32:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.F0sAuudZA8 00:40:55.490 20:32:47 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.F0sAuudZA8 00:40:55.490 20:32:47 keyring_file -- keyring/file.sh@30 -- # tgtpid=373896 00:40:55.490 20:32:47 keyring_file -- keyring/file.sh@32 -- # waitforlisten 373896 00:40:55.490 20:32:47 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:40:55.490 20:32:47 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 373896 ']' 00:40:55.490 20:32:47 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:55.490 20:32:47 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:40:55.490 20:32:47 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:55.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:55.490 20:32:47 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:40:55.490 20:32:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:55.490 [2024-05-15 20:32:47.822677] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:40:55.490 [2024-05-15 20:32:47.822748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid373896 ] 00:40:55.490 EAL: No free 2048 kB hugepages reported on node 1 00:40:55.490 [2024-05-15 20:32:47.909526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:55.751 [2024-05-15 20:32:48.004879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:56.323 20:32:48 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:40:56.323 20:32:48 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:40:56.323 20:32:48 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:40:56.323 20:32:48 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:56.323 20:32:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:56.323 [2024-05-15 20:32:48.702303] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:56.323 null0 00:40:56.323 [2024-05-15 20:32:48.734330] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:40:56.323 [2024-05-15 20:32:48.734395] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:40:56.323 [2024-05-15 20:32:48.735072] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:56.323 [2024-05-15 20:32:48.742375] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:40:56.323 20:32:48 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:56.324 20:32:48 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:56.324 20:32:48 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:40:56.324 20:32:48 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:56.324 20:32:48 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:40:56.324 20:32:48 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:56.324 20:32:48 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:40:56.324 20:32:48 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:56.324 20:32:48 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:56.324 20:32:48 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:56.324 20:32:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:56.324 [2024-05-15 20:32:48.758405] nvmf_rpc.c: 773:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:40:56.324 request: 00:40:56.324 { 00:40:56.324 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:40:56.324 "secure_channel": false, 00:40:56.324 "listen_address": { 00:40:56.324 "trtype": "tcp", 00:40:56.324 "traddr": "127.0.0.1", 00:40:56.324 "trsvcid": "4420" 00:40:56.324 }, 00:40:56.324 "method": "nvmf_subsystem_add_listener", 00:40:56.324 "req_id": 1 00:40:56.324 } 00:40:56.324 Got JSON-RPC error response 00:40:56.324 response: 00:40:56.324 { 00:40:56.324 "code": -32602, 00:40:56.324 "message": 
"Invalid parameters" 00:40:56.324 } 00:40:56.324 20:32:48 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:40:56.324 20:32:48 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:40:56.324 20:32:48 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:56.324 20:32:48 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:56.324 20:32:48 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:56.324 20:32:48 keyring_file -- keyring/file.sh@46 -- # bperfpid=373969 00:40:56.324 20:32:48 keyring_file -- keyring/file.sh@48 -- # waitforlisten 373969 /var/tmp/bperf.sock 00:40:56.324 20:32:48 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 373969 ']' 00:40:56.324 20:32:48 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:40:56.324 20:32:48 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:56.324 20:32:48 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:40:56.324 20:32:48 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:56.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:56.324 20:32:48 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:40:56.324 20:32:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:56.324 [2024-05-15 20:32:48.816418] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 00:40:56.324 [2024-05-15 20:32:48.816479] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid373969 ] 00:40:56.584 EAL: No free 2048 kB hugepages reported on node 1 00:40:56.584 [2024-05-15 20:32:48.886395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:56.584 [2024-05-15 20:32:48.959395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:40:56.584 20:32:49 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:40:56.584 20:32:49 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:40:56.584 20:32:49 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xZH0Dui5tI 00:40:56.584 20:32:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xZH0Dui5tI 00:40:56.843 20:32:49 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.F0sAuudZA8 00:40:56.843 20:32:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.F0sAuudZA8 00:40:57.103 20:32:49 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:40:57.103 20:32:49 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:40:57.103 20:32:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:57.103 20:32:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:57.103 20:32:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:57.363 
20:32:49 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.xZH0Dui5tI == \/\t\m\p\/\t\m\p\.\x\Z\H\0\D\u\i\5\t\I ]] 00:40:57.363 20:32:49 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:40:57.363 20:32:49 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:40:57.363 20:32:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:57.363 20:32:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:57.363 20:32:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:57.624 20:32:49 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.F0sAuudZA8 == \/\t\m\p\/\t\m\p\.\F\0\s\A\u\u\d\Z\A\8 ]] 00:40:57.624 20:32:49 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:40:57.624 20:32:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:57.624 20:32:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:57.624 20:32:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:57.624 20:32:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:57.624 20:32:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:57.624 20:32:50 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:40:57.624 20:32:50 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:40:57.624 20:32:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:57.624 20:32:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:57.624 20:32:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:57.624 20:32:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:57.624 20:32:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:57.885 20:32:50 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:40:57.885 20:32:50 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:57.885 20:32:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:58.145 [2024-05-15 20:32:50.479911] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:58.145 nvme0n1 00:40:58.145 20:32:50 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:40:58.145 20:32:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:58.145 20:32:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:58.145 20:32:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:58.145 20:32:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:58.145 20:32:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:58.406 20:32:50 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:40:58.406 20:32:50 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:40:58.406 20:32:50 keyring_file -- 
keyring/common.sh@12 -- # get_key key1 00:40:58.406 20:32:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:58.406 20:32:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:58.406 20:32:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:58.406 20:32:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:58.666 20:32:51 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:40:58.666 20:32:51 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:58.666 Running I/O for 1 seconds... 00:41:00.048 00:41:00.048 Latency(us) 00:41:00.048 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:00.048 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:41:00.048 nvme0n1 : 1.01 7980.42 31.17 0.00 0.00 15931.91 9775.79 22282.24 00:41:00.048 =================================================================================================================== 00:41:00.048 Total : 7980.42 31.17 0.00 0.00 15931.91 9775.79 22282.24 00:41:00.048 0 00:41:00.048 20:32:52 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:41:00.048 20:32:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:41:00.048 20:32:52 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:41:00.048 20:32:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:00.048 20:32:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:00.048 20:32:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:00.048 20:32:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:00.048 20:32:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:00.309 20:32:52 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:41:00.309 20:32:52 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:41:00.309 20:32:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:00.309 20:32:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:00.309 20:32:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:00.309 20:32:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:00.309 20:32:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:00.309 20:32:52 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:41:00.309 20:32:52 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:41:00.309 20:32:52 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:41:00.309 20:32:52 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:41:00.309 20:32:52 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 
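The repeated keyring_get_keys / jq pairs above are the keyring/common.sh helpers used to check key reference counts around an attach; reconstructed from the traced commands, the pattern is approximately:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bperf_cmd()  { "$rpc" -s /var/tmp/bperf.sock "$@"; }
    get_key()    { bperf_cmd keyring_get_keys | jq ".[] | select(.name == \"$1\")"; }
    get_refcnt() { get_key "$1" | jq -r .refcnt; }

    # Attach a controller using the registered PSK, then check the key's refcount,
    # which the trace expects to read 2 while nvme0 holds key0.
    bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
    get_refcnt key0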
00:41:00.309 20:32:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:00.309 20:32:52 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:41:00.309 20:32:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:00.309 20:32:52 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:41:00.309 20:32:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:41:00.570 [2024-05-15 20:32:52.992591] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:41:00.570 [2024-05-15 20:32:52.993192] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6e630 (107): Transport endpoint is not connected 00:41:00.570 [2024-05-15 20:32:52.994186] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6e630 (9): Bad file descriptor 00:41:00.570 [2024-05-15 20:32:52.995187] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:41:00.570 [2024-05-15 20:32:52.995198] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:41:00.570 [2024-05-15 20:32:52.995205] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
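The failing bdev_nvme_attach_controller call above is deliberate: it is wrapped in autotest_common.sh's NOT helper, which succeeds only when the wrapped command fails. A reduced sketch of that pattern (the real helper also special-cases signal exits via the es > 128 check seen in the trace):

    # Returns success only if the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Attaching with the wrong PSK (key1) must fail:
    NOT "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1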
00:41:00.570 request: 00:41:00.570 { 00:41:00.570 "name": "nvme0", 00:41:00.570 "trtype": "tcp", 00:41:00.570 "traddr": "127.0.0.1", 00:41:00.570 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:00.570 "adrfam": "ipv4", 00:41:00.570 "trsvcid": "4420", 00:41:00.570 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:00.570 "psk": "key1", 00:41:00.570 "method": "bdev_nvme_attach_controller", 00:41:00.570 "req_id": 1 00:41:00.570 } 00:41:00.570 Got JSON-RPC error response 00:41:00.570 response: 00:41:00.570 { 00:41:00.570 "code": -32602, 00:41:00.570 "message": "Invalid parameters" 00:41:00.570 } 00:41:00.570 20:32:53 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:41:00.570 20:32:53 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:41:00.570 20:32:53 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:41:00.570 20:32:53 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:41:00.570 20:32:53 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:41:00.570 20:32:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:00.570 20:32:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:00.570 20:32:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:00.570 20:32:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:00.570 20:32:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:00.830 20:32:53 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:41:00.830 20:32:53 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:41:00.830 20:32:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:00.830 20:32:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:00.830 20:32:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:00.830 20:32:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:00.830 20:32:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:01.090 20:32:53 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:41:01.090 20:32:53 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:41:01.090 20:32:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:41:01.350 20:32:53 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:41:01.350 20:32:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:41:01.350 20:32:53 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:41:01.350 20:32:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:01.350 20:32:53 keyring_file -- keyring/file.sh@77 -- # jq length 00:41:01.611 20:32:54 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:41:01.611 20:32:54 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.xZH0Dui5tI 00:41:01.611 20:32:54 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.xZH0Dui5tI 00:41:01.611 20:32:54 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:41:01.611 20:32:54 
keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.xZH0Dui5tI 00:41:01.611 20:32:54 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:41:01.611 20:32:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:01.611 20:32:54 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:41:01.611 20:32:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:01.611 20:32:54 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xZH0Dui5tI 00:41:01.611 20:32:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xZH0Dui5tI 00:41:01.871 [2024-05-15 20:32:54.230560] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.xZH0Dui5tI': 0100660 00:41:01.871 [2024-05-15 20:32:54.230581] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:41:01.871 request: 00:41:01.871 { 00:41:01.871 "name": "key0", 00:41:01.871 "path": "/tmp/tmp.xZH0Dui5tI", 00:41:01.871 "method": "keyring_file_add_key", 00:41:01.871 "req_id": 1 00:41:01.871 } 00:41:01.871 Got JSON-RPC error response 00:41:01.871 response: 00:41:01.871 { 00:41:01.871 "code": -1, 00:41:01.871 "message": "Operation not permitted" 00:41:01.871 } 00:41:01.871 20:32:54 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:41:01.871 20:32:54 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:41:01.871 20:32:54 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:41:01.871 20:32:54 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:41:01.871 20:32:54 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.xZH0Dui5tI 00:41:01.871 20:32:54 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xZH0Dui5tI 00:41:01.871 20:32:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xZH0Dui5tI 00:41:02.131 20:32:54 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.xZH0Dui5tI 00:41:02.131 20:32:54 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:41:02.131 20:32:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:02.131 20:32:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:02.131 20:32:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:02.131 20:32:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:02.131 20:32:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:02.392 20:32:54 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:41:02.392 20:32:54 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:02.392 20:32:54 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:41:02.392 20:32:54 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:02.392 20:32:54 
keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:41:02.392 20:32:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:02.392 20:32:54 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:41:02.392 20:32:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:02.392 20:32:54 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:02.392 20:32:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:02.392 [2024-05-15 20:32:54.876204] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.xZH0Dui5tI': No such file or directory 00:41:02.392 [2024-05-15 20:32:54.876220] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:41:02.392 [2024-05-15 20:32:54.876242] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:41:02.392 [2024-05-15 20:32:54.876249] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:41:02.392 [2024-05-15 20:32:54.876255] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:41:02.392 request: 00:41:02.392 { 00:41:02.392 "name": "nvme0", 00:41:02.392 "trtype": "tcp", 00:41:02.392 "traddr": "127.0.0.1", 00:41:02.392 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:02.392 "adrfam": "ipv4", 00:41:02.392 "trsvcid": "4420", 00:41:02.392 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:02.392 "psk": "key0", 00:41:02.392 "method": "bdev_nvme_attach_controller", 00:41:02.392 "req_id": 1 00:41:02.392 } 00:41:02.392 Got JSON-RPC error response 00:41:02.392 response: 00:41:02.392 { 00:41:02.392 "code": -19, 00:41:02.392 "message": "No such device" 00:41:02.392 } 00:41:02.392 20:32:54 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:41:02.392 20:32:54 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:41:02.392 20:32:54 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:41:02.392 20:32:54 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:41:02.392 20:32:54 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:41:02.392 20:32:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:41:02.680 20:32:55 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:41:02.680 20:32:55 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:41:02.680 20:32:55 keyring_file -- keyring/common.sh@17 -- # name=key0 00:41:02.680 20:32:55 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:41:02.680 20:32:55 keyring_file -- keyring/common.sh@17 -- # digest=0 00:41:02.680 20:32:55 keyring_file -- keyring/common.sh@18 -- # mktemp 00:41:02.680 20:32:55 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.yrEGwCDBfw 00:41:02.680 20:32:55 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:41:02.680 20:32:55 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:41:02.680 20:32:55 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:41:02.680 20:32:55 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:41:02.680 20:32:55 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:41:02.680 20:32:55 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:41:02.680 20:32:55 keyring_file -- nvmf/common.sh@705 -- # python - 00:41:02.680 20:32:55 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.yrEGwCDBfw 00:41:02.680 20:32:55 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.yrEGwCDBfw 00:41:02.680 20:32:55 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.yrEGwCDBfw 00:41:02.680 20:32:55 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.yrEGwCDBfw 00:41:02.680 20:32:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.yrEGwCDBfw 00:41:02.941 20:32:55 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:02.941 20:32:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:03.201 nvme0n1 00:41:03.201 20:32:55 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:41:03.201 20:32:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:03.201 20:32:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:03.201 20:32:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:03.201 20:32:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:03.201 20:32:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:03.462 20:32:55 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:41:03.462 20:32:55 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:41:03.462 20:32:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:41:03.722 20:32:56 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:41:03.722 20:32:56 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:41:03.722 20:32:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:03.722 20:32:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:03.722 20:32:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:03.983 20:32:56 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:41:03.983 20:32:56 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:41:03.983 20:32:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:03.983 20:32:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:03.983 20:32:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:03.984 20:32:56 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:03.984 20:32:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:03.984 20:32:56 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:41:03.984 20:32:56 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:41:03.984 20:32:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:41:04.245 20:32:56 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:41:04.245 20:32:56 keyring_file -- keyring/file.sh@104 -- # jq length 00:41:04.245 20:32:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:04.507 20:32:56 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:41:04.507 20:32:56 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.yrEGwCDBfw 00:41:04.507 20:32:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.yrEGwCDBfw 00:41:04.768 20:32:57 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.F0sAuudZA8 00:41:04.768 20:32:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.F0sAuudZA8 00:41:05.028 20:32:57 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:05.028 20:32:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:05.288 nvme0n1 00:41:05.288 20:32:57 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:41:05.288 20:32:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:41:05.550 20:32:57 keyring_file -- keyring/file.sh@112 -- # config='{ 00:41:05.550 "subsystems": [ 00:41:05.550 { 00:41:05.550 "subsystem": "keyring", 00:41:05.550 "config": [ 00:41:05.550 { 00:41:05.550 "method": "keyring_file_add_key", 00:41:05.550 "params": { 00:41:05.550 "name": "key0", 00:41:05.550 "path": "/tmp/tmp.yrEGwCDBfw" 00:41:05.550 } 00:41:05.550 }, 00:41:05.550 { 00:41:05.550 "method": "keyring_file_add_key", 00:41:05.550 "params": { 00:41:05.550 "name": "key1", 00:41:05.550 "path": "/tmp/tmp.F0sAuudZA8" 00:41:05.550 } 00:41:05.550 } 00:41:05.550 ] 00:41:05.550 }, 00:41:05.550 { 00:41:05.550 "subsystem": "iobuf", 00:41:05.550 "config": [ 00:41:05.550 { 00:41:05.550 "method": "iobuf_set_options", 00:41:05.550 "params": { 00:41:05.550 "small_pool_count": 8192, 00:41:05.550 "large_pool_count": 1024, 00:41:05.551 "small_bufsize": 8192, 00:41:05.551 "large_bufsize": 135168 00:41:05.551 } 00:41:05.551 } 00:41:05.551 ] 00:41:05.551 }, 00:41:05.551 { 00:41:05.551 "subsystem": "sock", 00:41:05.551 "config": [ 00:41:05.551 { 00:41:05.551 "method": "sock_impl_set_options", 00:41:05.551 "params": { 00:41:05.551 
"impl_name": "posix", 00:41:05.551 "recv_buf_size": 2097152, 00:41:05.551 "send_buf_size": 2097152, 00:41:05.551 "enable_recv_pipe": true, 00:41:05.551 "enable_quickack": false, 00:41:05.551 "enable_placement_id": 0, 00:41:05.551 "enable_zerocopy_send_server": true, 00:41:05.551 "enable_zerocopy_send_client": false, 00:41:05.551 "zerocopy_threshold": 0, 00:41:05.551 "tls_version": 0, 00:41:05.551 "enable_ktls": false 00:41:05.551 } 00:41:05.551 }, 00:41:05.551 { 00:41:05.551 "method": "sock_impl_set_options", 00:41:05.551 "params": { 00:41:05.551 "impl_name": "ssl", 00:41:05.551 "recv_buf_size": 4096, 00:41:05.551 "send_buf_size": 4096, 00:41:05.551 "enable_recv_pipe": true, 00:41:05.551 "enable_quickack": false, 00:41:05.551 "enable_placement_id": 0, 00:41:05.551 "enable_zerocopy_send_server": true, 00:41:05.551 "enable_zerocopy_send_client": false, 00:41:05.551 "zerocopy_threshold": 0, 00:41:05.551 "tls_version": 0, 00:41:05.551 "enable_ktls": false 00:41:05.551 } 00:41:05.551 } 00:41:05.551 ] 00:41:05.551 }, 00:41:05.551 { 00:41:05.551 "subsystem": "vmd", 00:41:05.551 "config": [] 00:41:05.551 }, 00:41:05.551 { 00:41:05.551 "subsystem": "accel", 00:41:05.551 "config": [ 00:41:05.551 { 00:41:05.551 "method": "accel_set_options", 00:41:05.551 "params": { 00:41:05.551 "small_cache_size": 128, 00:41:05.551 "large_cache_size": 16, 00:41:05.551 "task_count": 2048, 00:41:05.551 "sequence_count": 2048, 00:41:05.551 "buf_count": 2048 00:41:05.551 } 00:41:05.551 } 00:41:05.551 ] 00:41:05.551 }, 00:41:05.551 { 00:41:05.551 "subsystem": "bdev", 00:41:05.551 "config": [ 00:41:05.551 { 00:41:05.551 "method": "bdev_set_options", 00:41:05.551 "params": { 00:41:05.551 "bdev_io_pool_size": 65535, 00:41:05.551 "bdev_io_cache_size": 256, 00:41:05.551 "bdev_auto_examine": true, 00:41:05.551 "iobuf_small_cache_size": 128, 00:41:05.551 "iobuf_large_cache_size": 16 00:41:05.551 } 00:41:05.551 }, 00:41:05.551 { 00:41:05.551 "method": "bdev_raid_set_options", 00:41:05.551 "params": { 00:41:05.551 "process_window_size_kb": 1024 00:41:05.551 } 00:41:05.551 }, 00:41:05.551 { 00:41:05.551 "method": "bdev_iscsi_set_options", 00:41:05.551 "params": { 00:41:05.551 "timeout_sec": 30 00:41:05.551 } 00:41:05.551 }, 00:41:05.551 { 00:41:05.551 "method": "bdev_nvme_set_options", 00:41:05.551 "params": { 00:41:05.551 "action_on_timeout": "none", 00:41:05.551 "timeout_us": 0, 00:41:05.551 "timeout_admin_us": 0, 00:41:05.551 "keep_alive_timeout_ms": 10000, 00:41:05.551 "arbitration_burst": 0, 00:41:05.551 "low_priority_weight": 0, 00:41:05.551 "medium_priority_weight": 0, 00:41:05.551 "high_priority_weight": 0, 00:41:05.551 "nvme_adminq_poll_period_us": 10000, 00:41:05.551 "nvme_ioq_poll_period_us": 0, 00:41:05.551 "io_queue_requests": 512, 00:41:05.551 "delay_cmd_submit": true, 00:41:05.551 "transport_retry_count": 4, 00:41:05.551 "bdev_retry_count": 3, 00:41:05.551 "transport_ack_timeout": 0, 00:41:05.551 "ctrlr_loss_timeout_sec": 0, 00:41:05.551 "reconnect_delay_sec": 0, 00:41:05.551 "fast_io_fail_timeout_sec": 0, 00:41:05.551 "disable_auto_failback": false, 00:41:05.551 "generate_uuids": false, 00:41:05.551 "transport_tos": 0, 00:41:05.551 "nvme_error_stat": false, 00:41:05.551 "rdma_srq_size": 0, 00:41:05.551 "io_path_stat": false, 00:41:05.551 "allow_accel_sequence": false, 00:41:05.551 "rdma_max_cq_size": 0, 00:41:05.551 "rdma_cm_event_timeout_ms": 0, 00:41:05.551 "dhchap_digests": [ 00:41:05.551 "sha256", 00:41:05.551 "sha384", 00:41:05.551 "sha512" 00:41:05.551 ], 00:41:05.551 "dhchap_dhgroups": [ 00:41:05.551 "null", 
00:41:05.551 "ffdhe2048", 00:41:05.551 "ffdhe3072", 00:41:05.551 "ffdhe4096", 00:41:05.551 "ffdhe6144", 00:41:05.551 "ffdhe8192" 00:41:05.551 ] 00:41:05.551 } 00:41:05.551 }, 00:41:05.551 { 00:41:05.551 "method": "bdev_nvme_attach_controller", 00:41:05.551 "params": { 00:41:05.551 "name": "nvme0", 00:41:05.551 "trtype": "TCP", 00:41:05.551 "adrfam": "IPv4", 00:41:05.551 "traddr": "127.0.0.1", 00:41:05.551 "trsvcid": "4420", 00:41:05.551 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:05.551 "prchk_reftag": false, 00:41:05.551 "prchk_guard": false, 00:41:05.551 "ctrlr_loss_timeout_sec": 0, 00:41:05.551 "reconnect_delay_sec": 0, 00:41:05.551 "fast_io_fail_timeout_sec": 0, 00:41:05.551 "psk": "key0", 00:41:05.551 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:05.551 "hdgst": false, 00:41:05.551 "ddgst": false 00:41:05.551 } 00:41:05.551 }, 00:41:05.551 { 00:41:05.551 "method": "bdev_nvme_set_hotplug", 00:41:05.551 "params": { 00:41:05.551 "period_us": 100000, 00:41:05.551 "enable": false 00:41:05.551 } 00:41:05.551 }, 00:41:05.551 { 00:41:05.551 "method": "bdev_wait_for_examine" 00:41:05.551 } 00:41:05.551 ] 00:41:05.551 }, 00:41:05.551 { 00:41:05.551 "subsystem": "nbd", 00:41:05.551 "config": [] 00:41:05.551 } 00:41:05.551 ] 00:41:05.551 }' 00:41:05.551 20:32:57 keyring_file -- keyring/file.sh@114 -- # killprocess 373969 00:41:05.551 20:32:57 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 373969 ']' 00:41:05.551 20:32:57 keyring_file -- common/autotest_common.sh@950 -- # kill -0 373969 00:41:05.551 20:32:57 keyring_file -- common/autotest_common.sh@951 -- # uname 00:41:05.551 20:32:57 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:41:05.551 20:32:57 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 373969 00:41:05.551 20:32:57 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:41:05.551 20:32:57 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:41:05.551 20:32:57 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 373969' 00:41:05.551 killing process with pid 373969 00:41:05.551 20:32:57 keyring_file -- common/autotest_common.sh@965 -- # kill 373969 00:41:05.551 Received shutdown signal, test time was about 1.000000 seconds 00:41:05.551 00:41:05.551 Latency(us) 00:41:05.551 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:05.551 =================================================================================================================== 00:41:05.551 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:05.551 20:32:57 keyring_file -- common/autotest_common.sh@970 -- # wait 373969 00:41:05.551 20:32:58 keyring_file -- keyring/file.sh@117 -- # bperfpid=375848 00:41:05.551 20:32:58 keyring_file -- keyring/file.sh@119 -- # waitforlisten 375848 /var/tmp/bperf.sock 00:41:05.551 20:32:58 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 375848 ']' 00:41:05.551 20:32:58 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:05.551 20:32:58 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:41:05.551 20:32:58 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:41:05.551 20:32:58 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bperf.sock...' 00:41:05.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:41:05.551 20:32:58 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:41:05.551 20:32:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:05.551 20:32:58 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:41:05.551 "subsystems": [ 00:41:05.551 { 00:41:05.551 "subsystem": "keyring", 00:41:05.551 "config": [ 00:41:05.551 { 00:41:05.551 "method": "keyring_file_add_key", 00:41:05.551 "params": { 00:41:05.551 "name": "key0", 00:41:05.551 "path": "/tmp/tmp.yrEGwCDBfw" 00:41:05.551 } 00:41:05.551 }, 00:41:05.551 { 00:41:05.551 "method": "keyring_file_add_key", 00:41:05.551 "params": { 00:41:05.551 "name": "key1", 00:41:05.551 "path": "/tmp/tmp.F0sAuudZA8" 00:41:05.551 } 00:41:05.551 } 00:41:05.551 ] 00:41:05.551 }, 00:41:05.551 { 00:41:05.552 "subsystem": "iobuf", 00:41:05.552 "config": [ 00:41:05.552 { 00:41:05.552 "method": "iobuf_set_options", 00:41:05.552 "params": { 00:41:05.552 "small_pool_count": 8192, 00:41:05.552 "large_pool_count": 1024, 00:41:05.552 "small_bufsize": 8192, 00:41:05.552 "large_bufsize": 135168 00:41:05.552 } 00:41:05.552 } 00:41:05.552 ] 00:41:05.552 }, 00:41:05.552 { 00:41:05.552 "subsystem": "sock", 00:41:05.552 "config": [ 00:41:05.552 { 00:41:05.552 "method": "sock_impl_set_options", 00:41:05.552 "params": { 00:41:05.552 "impl_name": "posix", 00:41:05.552 "recv_buf_size": 2097152, 00:41:05.552 "send_buf_size": 2097152, 00:41:05.552 "enable_recv_pipe": true, 00:41:05.552 "enable_quickack": false, 00:41:05.552 "enable_placement_id": 0, 00:41:05.552 "enable_zerocopy_send_server": true, 00:41:05.552 "enable_zerocopy_send_client": false, 00:41:05.552 "zerocopy_threshold": 0, 00:41:05.552 "tls_version": 0, 00:41:05.552 "enable_ktls": false 00:41:05.552 } 00:41:05.552 }, 00:41:05.552 { 00:41:05.552 "method": "sock_impl_set_options", 00:41:05.552 "params": { 00:41:05.552 "impl_name": "ssl", 00:41:05.552 "recv_buf_size": 4096, 00:41:05.552 "send_buf_size": 4096, 00:41:05.552 "enable_recv_pipe": true, 00:41:05.552 "enable_quickack": false, 00:41:05.552 "enable_placement_id": 0, 00:41:05.552 "enable_zerocopy_send_server": true, 00:41:05.552 "enable_zerocopy_send_client": false, 00:41:05.552 "zerocopy_threshold": 0, 00:41:05.552 "tls_version": 0, 00:41:05.552 "enable_ktls": false 00:41:05.552 } 00:41:05.552 } 00:41:05.552 ] 00:41:05.552 }, 00:41:05.552 { 00:41:05.552 "subsystem": "vmd", 00:41:05.552 "config": [] 00:41:05.552 }, 00:41:05.552 { 00:41:05.552 "subsystem": "accel", 00:41:05.552 "config": [ 00:41:05.552 { 00:41:05.552 "method": "accel_set_options", 00:41:05.552 "params": { 00:41:05.552 "small_cache_size": 128, 00:41:05.552 "large_cache_size": 16, 00:41:05.552 "task_count": 2048, 00:41:05.552 "sequence_count": 2048, 00:41:05.552 "buf_count": 2048 00:41:05.552 } 00:41:05.552 } 00:41:05.552 ] 00:41:05.552 }, 00:41:05.552 { 00:41:05.552 "subsystem": "bdev", 00:41:05.552 "config": [ 00:41:05.552 { 00:41:05.552 "method": "bdev_set_options", 00:41:05.552 "params": { 00:41:05.552 "bdev_io_pool_size": 65535, 00:41:05.552 "bdev_io_cache_size": 256, 00:41:05.552 "bdev_auto_examine": true, 00:41:05.552 "iobuf_small_cache_size": 128, 00:41:05.552 "iobuf_large_cache_size": 16 00:41:05.552 } 00:41:05.552 }, 00:41:05.552 { 00:41:05.552 "method": "bdev_raid_set_options", 00:41:05.552 "params": { 00:41:05.552 "process_window_size_kb": 1024 00:41:05.552 } 00:41:05.552 }, 00:41:05.552 { 00:41:05.552 "method": 
"bdev_iscsi_set_options", 00:41:05.552 "params": { 00:41:05.552 "timeout_sec": 30 00:41:05.552 } 00:41:05.552 }, 00:41:05.552 { 00:41:05.552 "method": "bdev_nvme_set_options", 00:41:05.552 "params": { 00:41:05.552 "action_on_timeout": "none", 00:41:05.552 "timeout_us": 0, 00:41:05.552 "timeout_admin_us": 0, 00:41:05.552 "keep_alive_timeout_ms": 10000, 00:41:05.552 "arbitration_burst": 0, 00:41:05.552 "low_priority_weight": 0, 00:41:05.552 "medium_priority_weight": 0, 00:41:05.552 "high_priority_weight": 0, 00:41:05.552 "nvme_adminq_poll_period_us": 10000, 00:41:05.552 "nvme_ioq_poll_period_us": 0, 00:41:05.552 "io_queue_requests": 512, 00:41:05.552 "delay_cmd_submit": true, 00:41:05.552 "transport_retry_count": 4, 00:41:05.552 "bdev_retry_count": 3, 00:41:05.552 "transport_ack_timeout": 0, 00:41:05.552 "ctrlr_loss_timeout_sec": 0, 00:41:05.552 "reconnect_delay_sec": 0, 00:41:05.552 "fast_io_fail_timeout_sec": 0, 00:41:05.552 "disable_auto_failback": false, 00:41:05.552 "generate_uuids": false, 00:41:05.552 "transport_tos": 0, 00:41:05.552 "nvme_error_stat": false, 00:41:05.552 "rdma_srq_size": 0, 00:41:05.552 "io_path_stat": false, 00:41:05.552 "allow_accel_sequence": false, 00:41:05.552 "rdma_max_cq_size": 0, 00:41:05.552 "rdma_cm_event_timeout_ms": 0, 00:41:05.552 "dhchap_digests": [ 00:41:05.552 "sha256", 00:41:05.552 "sha384", 00:41:05.552 "sha512" 00:41:05.552 ], 00:41:05.552 "dhchap_dhgroups": [ 00:41:05.552 "null", 00:41:05.552 "ffdhe2048", 00:41:05.552 "ffdhe3072", 00:41:05.552 "ffdhe4096", 00:41:05.552 "ffdhe6144", 00:41:05.552 "ffdhe8192" 00:41:05.552 ] 00:41:05.552 } 00:41:05.552 }, 00:41:05.552 { 00:41:05.552 "method": "bdev_nvme_attach_controller", 00:41:05.552 "params": { 00:41:05.552 "name": "nvme0", 00:41:05.552 "trtype": "TCP", 00:41:05.552 "adrfam": "IPv4", 00:41:05.552 "traddr": "127.0.0.1", 00:41:05.552 "trsvcid": "4420", 00:41:05.552 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:05.552 "prchk_reftag": false, 00:41:05.552 "prchk_guard": false, 00:41:05.552 "ctrlr_loss_timeout_sec": 0, 00:41:05.552 "reconnect_delay_sec": 0, 00:41:05.552 "fast_io_fail_timeout_sec": 0, 00:41:05.552 "psk": "key0", 00:41:05.552 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:05.552 "hdgst": false, 00:41:05.552 "ddgst": false 00:41:05.552 } 00:41:05.552 }, 00:41:05.552 { 00:41:05.552 "method": "bdev_nvme_set_hotplug", 00:41:05.552 "params": { 00:41:05.552 "period_us": 100000, 00:41:05.552 "enable": false 00:41:05.552 } 00:41:05.552 }, 00:41:05.552 { 00:41:05.552 "method": "bdev_wait_for_examine" 00:41:05.552 } 00:41:05.552 ] 00:41:05.552 }, 00:41:05.552 { 00:41:05.552 "subsystem": "nbd", 00:41:05.552 "config": [] 00:41:05.552 } 00:41:05.552 ] 00:41:05.552 }' 00:41:05.813 [2024-05-15 20:32:58.084622] Starting SPDK v24.05-pre git sha1 40b11d962 / DPDK 23.11.0 initialization... 
00:41:05.813 [2024-05-15 20:32:58.084688] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid375848 ]
00:41:05.813 EAL: No free 2048 kB hugepages reported on node 1
00:41:05.813 [2024-05-15 20:32:58.148840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:41:05.813 [2024-05-15 20:32:58.213500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:41:06.074 [2024-05-15 20:32:58.352115] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:41:06.645 20:32:58 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:41:06.645 20:32:58 keyring_file -- common/autotest_common.sh@860 -- # return 0
00:41:06.645 20:32:58 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys
00:41:06.645 20:32:58 keyring_file -- keyring/file.sh@120 -- # jq length
00:41:06.645 20:32:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:41:06.645 20:32:59 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 ))
00:41:06.645 20:32:59 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0
00:41:06.645 20:32:59 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:41:06.645 20:32:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:41:06.645 20:32:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:41:06.645 20:32:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:41:06.645 20:32:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:41:06.904 20:32:59 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 ))
00:41:06.904 20:32:59 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1
00:41:06.904 20:32:59 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:41:06.904 20:32:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:41:06.904 20:32:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:41:06.904 20:32:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:41:06.904 20:32:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:41:07.164 20:32:59 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 ))
00:41:07.164 20:32:59 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers
00:41:07.164 20:32:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers
00:41:07.164 20:32:59 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name'
00:41:07.424 20:32:59 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]]
00:41:07.424 20:32:59 keyring_file -- keyring/file.sh@1 -- # cleanup
00:41:07.424 20:32:59 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.yrEGwCDBfw /tmp/tmp.F0sAuudZA8
00:41:07.424 20:32:59 keyring_file -- keyring/file.sh@20 -- # killprocess 375848
00:41:07.424 20:32:59 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 375848 ']'
00:41:07.424 20:32:59 keyring_file -- common/autotest_common.sh@950 -- # kill -0 375848
00:41:07.424 20:32:59 keyring_file -- common/autotest_common.sh@951 -- # uname
00:41:07.424 20:32:59 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:41:07.424 20:32:59 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 375848
00:41:07.424 20:32:59 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:41:07.424 20:32:59 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:41:07.424 20:32:59 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 375848'
00:41:07.424 killing process with pid 375848
00:41:07.424 20:32:59 keyring_file -- common/autotest_common.sh@965 -- # kill 375848
00:41:07.424 Received shutdown signal, test time was about 1.000000 seconds
00:41:07.424
00:41:07.424                          Latency(us)
00:41:07.424 Device Information : runtime(s)    IOPS    MiB/s    Fail/s    TO/s    Average    min    max
00:41:07.424 ===================================================================================================================
00:41:07.424 Total              :       0.00    0.00     0.00      0.00    0.00       0.00    18446744073709551616.00    0.00
00:41:07.424 20:32:59 keyring_file -- common/autotest_common.sh@970 -- # wait 375848
00:41:07.684 20:32:59 keyring_file -- keyring/file.sh@21 -- # killprocess 373896
00:41:07.684 20:32:59 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 373896 ']'
00:41:07.684 20:32:59 keyring_file -- common/autotest_common.sh@950 -- # kill -0 373896
00:41:07.684 20:32:59 keyring_file -- common/autotest_common.sh@951 -- # uname
00:41:07.684 20:32:59 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:41:07.684 20:32:59 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 373896
00:41:07.684 20:33:00 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:41:07.684 20:33:00 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:41:07.684 20:33:00 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 373896'
00:41:07.684 killing process with pid 373896
00:41:07.684 20:33:00 keyring_file -- common/autotest_common.sh@965 -- # kill 373896
00:41:07.684 [2024-05-15 20:33:00.033807] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:41:07.684 [2024-05-15 20:33:00.033843] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:41:07.684 20:33:00 keyring_file -- common/autotest_common.sh@970 -- # wait 373896
00:41:07.945
00:41:07.945 real 0m12.763s
00:41:07.945 user 0m31.008s
00:41:07.945 sys 0m2.884s
00:41:07.945 20:33:00 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable
00:41:07.945 20:33:00 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:41:07.945 ************************************
00:41:07.945 END TEST keyring_file
00:41:07.945 ************************************
00:41:07.945 20:33:00 -- spdk/autotest.sh@292 -- # [[ n == y ]]
00:41:07.945 20:33:00 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']'
00:41:07.945 20:33:00 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']'
00:41:07.945 20:33:00 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']'
00:41:07.945 20:33:00 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']'
00:41:07.945 20:33:00 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']'
00:41:07.945 20:33:00 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']'
00:41:07.945 20:33:00 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']'
00:41:07.945 20:33:00 --
spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:41:07.945 20:33:00 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:41:07.945 20:33:00 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:41:07.945 20:33:00 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:41:07.945 20:33:00 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:41:07.945 20:33:00 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:41:07.945 20:33:00 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:41:07.945 20:33:00 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:41:07.945 20:33:00 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:41:07.945 20:33:00 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:41:07.945 20:33:00 -- common/autotest_common.sh@720 -- # xtrace_disable 00:41:07.945 20:33:00 -- common/autotest_common.sh@10 -- # set +x 00:41:07.945 20:33:00 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:41:07.945 20:33:00 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:41:07.945 20:33:00 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:41:07.945 20:33:00 -- common/autotest_common.sh@10 -- # set +x 00:41:16.083 INFO: APP EXITING 00:41:16.083 INFO: killing all VMs 00:41:16.083 INFO: killing vhost app 00:41:16.083 INFO: EXIT DONE 00:41:19.383 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:41:19.383 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:41:19.644 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:41:19.644 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:41:19.644 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:41:19.644 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:41:19.644 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:41:19.644 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:41:19.644 0000:65:00.0 (144d a80a): Already using the nvme driver 00:41:19.644 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:41:19.644 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:41:19.644 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:41:19.904 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:41:19.904 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:41:19.904 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:41:19.904 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:41:19.904 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:41:24.106 Cleaning 00:41:24.106 Removing: /var/run/dpdk/spdk0/config 00:41:24.106 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:41:24.106 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:41:24.106 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:41:24.106 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:41:24.106 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:41:24.106 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:41:24.106 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:41:24.106 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:41:24.106 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:41:24.106 Removing: /var/run/dpdk/spdk0/hugepage_info 00:41:24.106 Removing: /var/run/dpdk/spdk1/config 00:41:24.106 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:41:24.106 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:41:24.106 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:41:24.106 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:41:24.106 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:41:24.106 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:41:24.106 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:41:24.106 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:41:24.106 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:41:24.106 Removing: /var/run/dpdk/spdk1/hugepage_info 00:41:24.106 Removing: /var/run/dpdk/spdk1/mp_socket 00:41:24.106 Removing: /var/run/dpdk/spdk2/config 00:41:24.106 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:41:24.106 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:41:24.106 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:41:24.106 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:41:24.106 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:41:24.106 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:41:24.106 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:41:24.106 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:41:24.106 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:41:24.106 Removing: /var/run/dpdk/spdk2/hugepage_info 00:41:24.106 Removing: /var/run/dpdk/spdk3/config 00:41:24.106 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:41:24.106 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:41:24.106 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:41:24.106 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:41:24.106 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:41:24.106 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:41:24.106 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:41:24.106 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:41:24.106 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:41:24.106 Removing: /var/run/dpdk/spdk3/hugepage_info 00:41:24.106 Removing: /var/run/dpdk/spdk4/config 00:41:24.106 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:41:24.106 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:41:24.106 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:41:24.106 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:41:24.106 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:41:24.106 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:41:24.106 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:41:24.106 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:41:24.106 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:41:24.106 Removing: /var/run/dpdk/spdk4/hugepage_info 00:41:24.106 Removing: /dev/shm/bdev_svc_trace.1 00:41:24.106 Removing: /dev/shm/nvmf_trace.0 00:41:24.106 Removing: /dev/shm/spdk_tgt_trace.pid3995176 00:41:24.106 Removing: /var/run/dpdk/spdk0 00:41:24.106 Removing: /var/run/dpdk/spdk1 00:41:24.106 Removing: /var/run/dpdk/spdk2 00:41:24.107 Removing: /var/run/dpdk/spdk3 00:41:24.107 Removing: /var/run/dpdk/spdk4 00:41:24.107 Removing: /var/run/dpdk/spdk_pid100431 00:41:24.107 Removing: /var/run/dpdk/spdk_pid102446 00:41:24.107 Removing: /var/run/dpdk/spdk_pid103214 00:41:24.107 Removing: /var/run/dpdk/spdk_pid103869 00:41:24.107 Removing: /var/run/dpdk/spdk_pid106457 00:41:24.107 Removing: /var/run/dpdk/spdk_pid107087 00:41:24.107 Removing: /var/run/dpdk/spdk_pid107877 00:41:24.107 Removing: /var/run/dpdk/spdk_pid113434 00:41:24.107 Removing: /var/run/dpdk/spdk_pid120455 00:41:24.107 Removing: /var/run/dpdk/spdk_pid126143 00:41:24.107 Removing: /var/run/dpdk/spdk_pid13296 00:41:24.107 Removing: /var/run/dpdk/spdk_pid172778 00:41:24.107 
Removing: /var/run/dpdk/spdk_pid178180 00:41:24.107 Removing: /var/run/dpdk/spdk_pid186044 00:41:24.107 Removing: /var/run/dpdk/spdk_pid187553 00:41:24.107 Removing: /var/run/dpdk/spdk_pid189268 00:41:24.107 Removing: /var/run/dpdk/spdk_pid194843 00:41:24.107 Removing: /var/run/dpdk/spdk_pid200403 00:41:24.107 Removing: /var/run/dpdk/spdk_pid210324 00:41:24.107 Removing: /var/run/dpdk/spdk_pid210326 00:41:24.107 Removing: /var/run/dpdk/spdk_pid215970 00:41:24.107 Removing: /var/run/dpdk/spdk_pid216117 00:41:24.107 Removing: /var/run/dpdk/spdk_pid216404 00:41:24.107 Removing: /var/run/dpdk/spdk_pid216841 00:41:24.107 Removing: /var/run/dpdk/spdk_pid216947 00:41:24.107 Removing: /var/run/dpdk/spdk_pid218374 00:41:24.107 Removing: /var/run/dpdk/spdk_pid220290 00:41:24.107 Removing: /var/run/dpdk/spdk_pid222139 00:41:24.107 Removing: /var/run/dpdk/spdk_pid224119 00:41:24.107 Removing: /var/run/dpdk/spdk_pid226228 00:41:24.107 Removing: /var/run/dpdk/spdk_pid228694 00:41:24.107 Removing: /var/run/dpdk/spdk_pid236226 00:41:24.107 Removing: /var/run/dpdk/spdk_pid236924 00:41:24.107 Removing: /var/run/dpdk/spdk_pid237783 00:41:24.107 Removing: /var/run/dpdk/spdk_pid238915 00:41:24.205 Cancelling nested steps due to timeout 00:41:24.208 Sending interrupt signal to process 00:41:24.404 Removing: /var/run/dpdk/spdk_pid245221 00:41:24.404 Removing: /var/run/dpdk/spdk_pid248346 00:41:24.404 Removing: /var/run/dpdk/spdk_pid24952 00:41:24.404 Removing: /var/run/dpdk/spdk_pid255517 00:41:24.404 Removing: /var/run/dpdk/spdk_pid262461 00:41:24.404 Removing: /var/run/dpdk/spdk_pid26957 00:41:24.404 Removing: /var/run/dpdk/spdk_pid273371 00:41:24.404 Removing: /var/run/dpdk/spdk_pid28051 00:41:24.404 Removing: /var/run/dpdk/spdk_pid283080 00:41:24.404 Removing: /var/run/dpdk/spdk_pid283085 00:41:24.404 Removing: /var/run/dpdk/spdk_pid307314 00:41:24.404 Removing: /var/run/dpdk/spdk_pid307852 00:41:24.404 Removing: /var/run/dpdk/spdk_pid308454 00:41:24.404 Removing: /var/run/dpdk/spdk_pid309112 00:41:24.404 Removing: /var/run/dpdk/spdk_pid310162 00:41:24.404 Removing: /var/run/dpdk/spdk_pid310694 00:41:24.404 Removing: /var/run/dpdk/spdk_pid311299 00:41:24.404 Removing: /var/run/dpdk/spdk_pid311900 00:41:24.404 Removing: /var/run/dpdk/spdk_pid317271 00:41:24.404 Removing: /var/run/dpdk/spdk_pid317603 00:41:24.404 Removing: /var/run/dpdk/spdk_pid325239 00:41:24.404 Removing: /var/run/dpdk/spdk_pid325386 00:41:24.404 Removing: /var/run/dpdk/spdk_pid328126 00:41:24.404 Removing: /var/run/dpdk/spdk_pid336290 00:41:24.404 Removing: /var/run/dpdk/spdk_pid336398 00:41:24.404 Removing: /var/run/dpdk/spdk_pid342893 00:41:24.404 Removing: /var/run/dpdk/spdk_pid345088 00:41:24.404 Removing: /var/run/dpdk/spdk_pid347372 00:41:24.404 Removing: /var/run/dpdk/spdk_pid348786 00:41:24.404 Removing: /var/run/dpdk/spdk_pid351005 00:41:24.404 Removing: /var/run/dpdk/spdk_pid352507 00:41:24.404 Removing: /var/run/dpdk/spdk_pid363371 00:41:24.404 Removing: /var/run/dpdk/spdk_pid364033 00:41:24.404 Removing: /var/run/dpdk/spdk_pid364602 00:41:24.404 Removing: /var/run/dpdk/spdk_pid367726 00:41:24.404 Removing: /var/run/dpdk/spdk_pid368155 00:41:24.404 Removing: /var/run/dpdk/spdk_pid368786 00:41:24.404 Removing: /var/run/dpdk/spdk_pid373896 00:41:24.404 Removing: /var/run/dpdk/spdk_pid373969 00:41:24.404 Removing: /var/run/dpdk/spdk_pid375848 00:41:24.404 Removing: /var/run/dpdk/spdk_pid3993571 00:41:24.404 Removing: /var/run/dpdk/spdk_pid3995176 00:41:24.404 Removing: /var/run/dpdk/spdk_pid3995886 00:41:24.404 Removing: 
/var/run/dpdk/spdk_pid3996926 00:41:24.404 Removing: /var/run/dpdk/spdk_pid3997266 00:41:24.404 Removing: /var/run/dpdk/spdk_pid3998332 00:41:24.404 Removing: /var/run/dpdk/spdk_pid3998667 00:41:24.404 Removing: /var/run/dpdk/spdk_pid3998841 00:41:24.404 Removing: /var/run/dpdk/spdk_pid3999981 00:41:24.404 Removing: /var/run/dpdk/spdk_pid4000701 00:41:24.404 Removing: /var/run/dpdk/spdk_pid4001078 00:41:24.404 Removing: /var/run/dpdk/spdk_pid4001472 00:41:24.404 Removing: /var/run/dpdk/spdk_pid4001874 00:41:24.404 Removing: /var/run/dpdk/spdk_pid4002205 00:41:24.404 Removing: /var/run/dpdk/spdk_pid4002373 00:41:24.404 Removing: /var/run/dpdk/spdk_pid4002659 00:41:24.404 Removing: /var/run/dpdk/spdk_pid4003039 00:41:24.404 Removing: /var/run/dpdk/spdk_pid4004104 00:41:24.404 Removing: /var/run/dpdk/spdk_pid4007676 00:41:24.404 Removing: /var/run/dpdk/spdk_pid4008047 00:41:24.404 Removing: /var/run/dpdk/spdk_pid4008410 00:41:24.404 Removing: /var/run/dpdk/spdk_pid4008534 00:41:24.404 Removing: /var/run/dpdk/spdk_pid4009125 00:41:24.404 Removing: /var/run/dpdk/spdk_pid4009153 00:41:24.404 Removing: /var/run/dpdk/spdk_pid4009822 00:41:24.404 Removing: /var/run/dpdk/spdk_pid4009848 00:41:24.404 Removing: /var/run/dpdk/spdk_pid4010213 00:41:24.404 Removing: /var/run/dpdk/spdk_pid4010502 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4010588 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4010922 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4011359 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4011713 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4012106 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4012367 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4012510 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4012570 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4012925 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4013275 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4013546 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4013733 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4014015 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4014364 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4014717 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4015029 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4015261 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4015476 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4015819 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4016169 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4016529 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4016755 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4016937 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4017268 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4017621 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4017977 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4018301 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4018505 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4018749 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4019162 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4024131 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4125949 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4131750 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4144072 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4151128 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4156512 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4157186 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4175839 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4176287 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4182509 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4190210 00:41:24.709 Removing: /var/run/dpdk/spdk_pid4193294 00:41:24.709 Removing: 
/var/run/dpdk/spdk_pid50872 00:41:24.709 Removing: /var/run/dpdk/spdk_pid55930 00:41:24.709 Removing: /var/run/dpdk/spdk_pid89525 00:41:24.709 Removing: /var/run/dpdk/spdk_pid95457 00:41:24.709 Removing: /var/run/dpdk/spdk_pid97355 00:41:24.709 Removing: /var/run/dpdk/spdk_pid99541 00:41:24.709 Removing: /var/run/dpdk/spdk_pid99706 00:41:24.709 Removing: /var/run/dpdk/spdk_pid99716 00:41:24.709 Removing: /var/run/dpdk/spdk_pid99934 00:41:24.709 Clean 00:41:24.970 20:33:17 -- common/autotest_common.sh@1447 -- # return 0 00:41:24.970 20:33:17 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:41:24.970 20:33:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:24.970 20:33:17 -- common/autotest_common.sh@10 -- # set +x 00:41:24.970 20:33:17 -- spdk/autotest.sh@382 -- # timing_exit autotest 00:41:24.970 20:33:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:24.970 20:33:17 -- common/autotest_common.sh@10 -- # set +x 00:41:24.970 20:33:17 -- spdk/autotest.sh@383 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:41:24.970 20:33:17 -- spdk/autotest.sh@385 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:41:24.970 20:33:17 -- spdk/autotest.sh@385 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:41:24.970 20:33:17 -- spdk/autotest.sh@387 -- # hash lcov 00:41:24.970 20:33:17 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:41:24.970 20:33:17 -- spdk/autotest.sh@389 -- # hostname 00:41:24.970 20:33:17 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:41:25.231 geninfo: WARNING: invalid characters removed from testname! 
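The lcov capture starting here is the coverage step of autotest; its full command line reappears in the cleanup process listing below. If the capture is cut short, as it is in this run, it can be repeated by hand afterwards. A minimal sketch, assuming the same workspace layout and an installed lcov/genhtml; the paths mirror the ones used in this log:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # capture coverage data from the build tree under the host's name, as autotest does
  lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
       --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
       --rc genhtml_legend=1 --rc geninfo_all_blocks=1 \
       --no-external -q -c -d . -t "$(hostname)" \
       -o ../output/cov_test.info
  # optional follow-up: render an HTML report from the captured data
  genhtml ../output/cov_test.info -o ../output/cov_html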
00:41:29.439 Terminated
00:41:29.447 script returned exit code 143
00:41:29.452 [Pipeline] }
00:41:29.471 [Pipeline] // stage
00:41:29.480 [Pipeline] }
00:41:29.501 [Pipeline] // timeout
00:41:29.509 [Pipeline] }
00:41:29.513 Timeout has been exceeded
00:41:29.513 org.jenkinsci.plugins.workflow.actions.ErrorAction$ErrorId: 7091c47b-ee47-4c6b-8ad5-3a1e39ad67b0
00:41:29.534 [Pipeline] // catchError
00:41:29.539 [Pipeline] }
00:41:29.555 [Pipeline] // wrap
00:41:29.561 [Pipeline] }
00:41:29.575 [Pipeline] // catchError
00:41:29.584 [Pipeline] stage
00:41:29.586 [Pipeline] { (Epilogue)
00:41:29.600 [Pipeline] catchError
00:41:29.602 [Pipeline] {
00:41:29.616 [Pipeline] echo
00:41:29.617 Cleanup processes
00:41:29.622 [Pipeline] sh
00:41:29.911 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:41:29.911 385294 /usr/bin/perl /usr/bin/lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:41:29.911 385324 /usr/bin/perl /usr/bin/geninfo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk --output-filename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info --test-name spdk-cyp-12 --quiet --no-external --rc genhtml_branch_coverage=1 --rc genhtml_legend=1 --rc genhtml_function_coverage=1 --rc lcov_function_coverage=1 --rc geninfo_all_blocks=1 --rc lcov_branch_coverage=1
00:41:29.911 387206 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:41:29.911 3931460 sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715795680
00:41:29.911 3931501 bash /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715795680
00:41:29.926 [Pipeline] sh
00:41:30.212 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:41:30.212 ++ grep -v 'sudo pgrep'
00:41:30.212 ++ awk '{print $1}'
00:41:30.212 + sudo kill -9 385294 385324 3931460 3931501
00:41:30.225 [Pipeline] sh
00:41:30.510 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:41:40.534 [Pipeline] sh
00:41:40.814 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:41:40.814 Artifacts sizes are good
00:41:40.828 [Pipeline] archiveArtifacts
00:41:40.835 Archiving artifacts
00:41:41.025 [Pipeline] sh
00:41:41.309 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:41:41.323 [Pipeline] cleanWs
00:41:41.332 [WS-CLEANUP] Deleting project workspace...
00:41:41.332 [WS-CLEANUP] Deferred wipeout is used...
00:41:41.339 [WS-CLEANUP] done
00:41:41.341 [Pipeline] }
00:41:41.359 [Pipeline] // catchError
00:41:41.368 [Pipeline] echo
00:41:41.370 Tests finished with errors. Please check the logs for more info.
00:41:41.373 [Pipeline] echo
00:41:41.374 Execution node will be rebooted.
00:41:41.388 [Pipeline] build
00:41:41.390 Scheduling project: reset-job
00:41:41.401 [Pipeline] sh
00:41:41.685 + logger -p user.info -t JENKINS-CI
00:41:41.695 [Pipeline] }
00:41:41.709 [Pipeline] // stage
00:41:41.714 [Pipeline] }
00:41:41.729 [Pipeline] // node
00:41:41.733 [Pipeline] End of Pipeline
00:41:41.762 Finished: ABORTED
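The "Cleanup processes" step in the epilogue above is what keeps the interrupted lcov/geninfo and collect-bmc-pm processes from leaking into the next build on this node. A minimal standalone sketch of the same pgrep/kill pattern, assuming the same workspace path; the pids variable name is illustrative:

  WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
  # list candidate processes, drop the pgrep invocation itself, keep only the PIDs
  pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
  # word splitting on $pids is intentional: kill needs one argument per PID
  if [ -n "$pids" ]; then
      sudo kill -9 $pids
  fi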